Scott Shadley, leadership narrative director and technology evangelist at Solidigm, which develops solid state drives (SSDs) for enterprises, says storage has always been the “undervalued child” of computing architecture.
However, AI has brought about a step change in the amount and speed of data collected and processed every day. “Even just five years ago, if you had a petabyte of data, you only kept 100 terabytes of it. Now we want to keep everything,” Shadley says.
Historically, storage decisions have been primarily based on cost per gigabyte. Nearly 90% of data center storage still relies on older hard disk drives (HDDs), which are cheaper to purchase than higher-performance SSDs.
However, HDDs have struggled to keep up with AI workflows, drawing renewed attention to how vast amounts of data are stored. After all, ultra-fast GPUs can only run as fast as the data can reach them.
Dollars per terabyte
HDDs are “a marvel of engineering, to be honest,” Shadley says. Their cost was once measured in dollars per gigabyte, but the aging technology has become so efficient that drives now cost about $0.011 per GB, making dollars per terabyte the only practical way to count.
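To put that figure in concrete terms, here is a minimal back-of-the-envelope sketch; the $0.011-per-GB price is the approximate figure cited above, and the petabyte example is purely illustrative:

    # Back-of-the-envelope HDD cost math using the ~$0.011/GB figure cited above.
    COST_PER_GB = 0.011                # dollars per gigabyte (approximate)

    cost_per_tb = COST_PER_GB * 1_000  # 1 TB = 1,000 GB -> roughly $11 per TB
    cost_per_pb = cost_per_tb * 1_000  # 1 PB = 1,000 TB -> roughly $11,000 per PB

    print(f"Cost per TB: ${cost_per_tb:,.2f}")
    print(f"Cost per PB (raw capacity only): ${cost_per_pb:,.2f}")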
And HDDs may stick around for some time. Shadley cites the CERN tape archive, which stores the vast amounts of data produced by the Large Hadron Collider, as an example of how older storage technologies remain important long after newer ones have superseded them.
However, the assumption that HDDs are the most cost-effective way to store data at scale is beginning to erode. In a recent white paper, “The Economics of Exabyte Storage,” Solidigm makes the case that storing 1 exabyte (1 million terabytes) on SSDs yields a lower total cost of ownership (TCO) than HDDs over a 10-year period.
Although SSDs can carry a higher upfront cost, they prove more cost-effective in the long run: they take up less space, consume less energy, and offer greater reliability.
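The shape of that argument can be sketched as a simple cost model. The function below mirrors the factors just named (acquisition price, floor space, energy, and replacement), but every numeric input in the example call is a placeholder assumption, not a figure from Solidigm’s white paper:

    # Skeleton of a total-cost-of-ownership (TCO) model for a storage fleet.
    # The cost terms mirror the factors named above; the example inputs are
    # placeholders to swap for real quotes, not numbers from the white paper.
    import math

    HOURS_PER_YEAR = 8760

    def storage_tco(capacity_tb, drive_tb, drive_price, drive_watts,
                    rack_units, rack_unit_cost_per_year,
                    energy_cost_per_kwh, years, refresh_cycles=1):
        """Total cost of buying and running enough drives to hold capacity_tb."""
        drives = math.ceil(capacity_tb / drive_tb)
        acquisition = drives * drive_price * refresh_cycles
        energy_kwh = drives * drive_watts * HOURS_PER_YEAR * years / 1000
        floor_space = rack_units * rack_unit_cost_per_year * years
        return acquisition + energy_kwh * energy_cost_per_kwh + floor_space

    # Illustrative call for 1 exabyte (1,000,000 TB); plug in real drive specs
    # and facility costs to compare an HDD fleet against an SSD fleet.
    example = storage_tco(capacity_tb=1_000_000, drive_tb=122, drive_price=10_000,
                          drive_watts=20, rack_units=200, rack_unit_cost_per_year=1_500,
                          energy_cost_per_kwh=0.12, years=10)
    print(f"Illustrative 10-year TCO: ${example:,.0f}")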
And when performance is an issue, even the slowest SSDs can outperform the fastest HDDs, providing speeds that some data-intensive workflows can’t function without.
Real-time performance
One of the many areas of research at Los Alamos National Laboratory (LANL) is simulating seismic activity from underground nuclear explosions so that weapons tests can be detected around the world.
This process generates incredible amounts of data that require near-instantaneous capture and often simultaneous analysis. HDDs simply cannot handle this type of intensive read/write workflow.
When reading, the drive’s head must seek to the right track and then wait for the platter to rotate the data underneath it, a delay that varies depending on where the data sits. Writing requires seeking and rotating again to find free space.
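That mechanical delay can be estimated directly. In this rough sketch, the 7,200 RPM spindle speed, the seek time, and the SSD figure in the closing comment are typical values assumed for illustration, not numbers from the article:

    # Rough mechanical-latency estimate for a spinning disk (typical values assumed).
    RPM = 7200                                 # common spindle speed
    AVG_SEEK_MS = 8.0                          # typical time to move the head to the track

    ms_per_revolution = 60_000 / RPM           # ~8.33 ms for one full rotation
    avg_rotational_ms = ms_per_revolution / 2  # on average, wait half a rotation
    avg_access_ms = AVG_SEEK_MS + avg_rotational_ms

    print(f"Average HDD access latency: ~{avg_access_ms:.1f} ms per random I/O")
    # A typical NVMe SSD services a random read in roughly 0.1 ms or less, with
    # no moving parts, so it can also handle many requests in parallel.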
This isn’t necessarily a problem for slower big data workflows, such as long-tail analysis of traffic camera footage. But Shadley says, “That speed is not fast enough for what the world’s AI factories will need in the future.”
Processes like the LANL experiment cannot work without SSDs that can read and write in parallel at near-real-time, predictable speeds.
It is a glimpse of the kind of data processing that is becoming commonplace with AI, and that will only accelerate as the technology matures and demands better storage solutions.
Evolving data storage
“Hard drives are hitting a wall from a capacity standpoint,” Shadley says. Today’s largest HDDs hold around 30 TB, a figure expected to reach roughly 100 TB by 2030.
Solidigm, meanwhile, is already shipping a 122 TB SSD that is physically smaller and still has plenty of room for densification, packing more storage into the same space, or for entirely new form factors.
For example, Solidigm worked with NVIDIA to tackle the challenges of liquid-cooling enterprise SSDs (eSSDs), “addressing issues such as hot-swap capabilities and the limitations of single-sided cooling,” Shadley says.
The resulting product is a “liquid-cooled, direct-to-chip, cold plate, hot-pluggable SSD” that “doesn’t take any extra footprint within the server,” Shadley says.
It is the first cold-plate-cooled enterprise SSD available in a reference architecture, which was demonstrated at NVIDIA’s annual GPU Technology Conference (GTC) in March 2025.
Other innovations are on the horizon. Solidigm is working with many OEMs on solutions where speed is not a priority, but where SSD reliability, small footprint, and reduced power consumption are advantageous.
One of the main benefits is that resources can be freed and redirected elsewhere. Replacing HDDs with SSDs in data centers can cut rack space by 90% and deliver power savings of up to 77%, leaving more watts available for GPUs, for example.
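To see what those percentages can mean in practice, here is a quick illustrative calculation; the 90% and 77% figures come from above, while the baseline rack count and storage power draw are hypothetical:

    # Illustrative effect of an HDD-to-SSD swap on a data center's budget.
    # The 90% space and 77% power figures are cited above; the baselines are assumed.
    storage_racks = 100             # assumed racks occupied by HDD storage today
    storage_power_kw = 500          # assumed power drawn by that storage tier

    racks_after = storage_racks * (1 - 0.90)      # 90% rack-space reduction
    freed_power_kw = storage_power_kw * 0.77      # up to 77% power savings

    print(f"Storage racks: {storage_racks} -> {racks_after:.0f}")
    print(f"Power freed for GPUs and other compute: ~{freed_power_kw:.0f} kW")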
Keeping up with AI advances
Ultimately, serving GPUs is the big challenge in AI computing. If everything upstream can’t keep up, the GPU won’t reach its full potential.
“We need to start paying more attention to the lakes of data that just happen to be sitting in our storage,” Shadley says. After all, the lake is the starting point for the pipeline.
Learn more about how to ensure your data infrastructure is built on a solid foundation.
