VentureBeat, October 13, 23:04
New Storage Challenges in the AI Era: How SSDs Surpass HDDs

 

As AI applications grow explosively, data centers face a storage bottleneck. Traditional HDDs fall short in the shift from cold data to warm data, unable to meet the low-latency, high-throughput demands of AI model training and inference. The article notes that while HDDs remain the cost-advantaged choice for cold storage, high-capacity SSDs are becoming key to resolving the AI factory bottleneck. SSDs not only far outperform HDDs, but also bring significant advantages in energy efficiency, physical footprint, and environmental sustainability, pushing data center architecture toward greater efficiency and scalability.

💡 Evolving AI workloads place new demands on storage: as AI applications develop, data that was once cold (such as archives) is now frequently accessed to train models and refine inference results. This shift from "cold" to "warm" data requires storage systems with low latency and high throughput to support parallel computation. Because of their mechanical design, traditional HDDs have inherent performance bottlenecks when handling such high-concurrency, low-latency requests.

⚡ SSDs are an efficient solution for AI data centers: compared with HDDs, the solid-state design of SSDs delivers far higher IOPS (input/output operations per second) and lower latency. This makes SSDs particularly well suited to AI workloads, maximizing GPU utilization and enabling faster model training and inference. SSDs also show clear advantages in power consumption, physical footprint, and cooling, helping lower data center operating costs and carbon emissions.

🏢 SSDs drive a fundamental shift in data center architecture: high-capacity SSDs are not just a hardware upgrade but a structural change in how AI-era data infrastructure is designed. Their large gains in performance, efficiency, and density free up power and space to scale GPUs further. This shift helps data centers achieve more AI compute within limited resources (such as power and space), and sharply reduces the physical footprint and number of devices to deploy and maintain, lowering long-term operating costs and environmental impact.

🌍 SSDs' sustainability advantages: compared with HDDs, SSDs occupy far less physical space in the data center, reducing demand for construction materials such as concrete and steel and indirectly cutting the associated greenhouse gas emissions. At drive end-of-life, there are also far fewer SSDs to dispose of, further easing the environmental burden.

Presented by Solidigm


As AI adoption surges, data centers face a critical bottleneck in storage — and traditional HDDs are at the center of it. Data that once sat idle as cold archives is now being pulled into frequent use to build more accurate models and deliver better inference results. This shift from cold data to warm data demands low-latency, high-throughput storage that can handle parallel computations. HDDs will remain the workhorse for low-cost cold storage, but without rethinking their role, the high-capacity storage layer risks becoming the weakest link in the AI factory.

"Modern AI workloads, combined with data center constraints, have created new challenges for HDDs," says Jeff Janukowicz, research vice president at IDC. "While HDD suppliers are addressing data storage growth by offering larger drives, this often comes at the expense of slower performance. As a result, the concept of 'nearline SSDs' is becoming an increasingly relevant topic of discussion within the industry."

Today, AI operators need to maximize GPU utilization, manage network-attached storage efficiently, and scale compute — all while cutting costs on increasingly scarce power and space. In an environment where every watt and every square inch counts, says Roger Corell, senior director of AI and leadership marketing at Solidigm, success requires more than a technical refresh. It calls for a deeper realignment.

“It speaks to the tectonic shift in the value of data for AI,” Corell says. “That’s where high-capacity SSDs come into play. Along with capacity, they bring performance and efficiency, enabling exabyte-scale storage pipelines to keep pace with relentless growth in data set sizes. All of that consumes power and space, so we need to do it as efficiently as possible to enable more GPU scale in this constrained environment.”

High-capacity SSDs aren’t just displacing HDDs — they’re removing one of the biggest bottlenecks on the AI factory floor. By delivering massive gains in performance, efficiency, and density, SSDs free up the power and space needed to push GPU scale further. It’s less a storage upgrade than a structural shift in how data infrastructure is designed for the AI era.

HDDs vs. SSDs: More than just a hardware refresh

HDDs have impressive mechanical designs, but they're made up of many moving parts that at scale use more energy, take up more space, and fail at a higher rate than solid state drives. The reliance on spinning platters and mechanical read/write heads inherently limits Input/Output Operations Per Second (IOPS), creating bottlenecks for AI workloads that demand low latency, high concurrency, and sustained throughput.

HDDs also struggle with latency-sensitive tasks, as the physical act of seeking data introduces mechanical delays unsuited for real-time AI inference and training. Moreover, their power and cooling requirements increase significantly under frequent and intensive data access, reducing efficiency as data scales and warms.
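To make that mechanical ceiling concrete, here is a back-of-the-envelope sketch (illustrative figures, not vendor specs) of why a single HDD actuator tops out around a hundred random IOPS, while SSDs, with no moving parts, routinely deliver hundreds of thousands:

```python
# Illustrative math: a random I/O on a spinning disk pays an average seek
# plus, on average, half a platter rotation before the target sector
# passes under the head. That service time caps random IOPS per actuator.

def hdd_random_iops(seek_ms: float, rpm: int) -> float:
    """Approximate maximum random IOPS for one HDD actuator."""
    rotational_latency_ms = 0.5 * (60_000 / rpm)  # half a revolution, in ms
    service_time_ms = seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# A typical 7,200 RPM nearline drive with ~8 ms average seek:
iops = hdd_random_iops(seek_ms=8.0, rpm=7200)
print(f"~{iops:.0f} random IOPS per drive")  # on the order of 80 IOPS
```

The numbers here are generic nearline-HDD figures for illustration; the point is structural: no matter how large the platter capacity grows, the mechanics bound IOPS per terabyte, which is exactly the metric AI workloads stress.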

In contrast, the SSD-based VAST storage solution cuts energy costs by roughly $1M a year, a major advantage in an AI environment where every watt matters. To demonstrate, Solidigm and VAST Data completed a study examining the economics of data storage at exabyte scale (a quintillion bytes, or a billion gigabytes), analyzing storage power consumption versus HDDs over a 10-year period.

As a starting reference point, you’d need four 30TB HDDs to equal the capacity of a single 122TB Solidigm SSD. After factoring in VAST’s data reduction techniques, made possible by the superior performance of SSDs, the exabyte solution comprises 3,738 Solidigm SSDs versus more than 40,000 high-capacity HDDs. The study found that the SSD-based VAST solution consumes 77% less storage energy.
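The drive counts are roughly reproducible from the capacities quoted above. The sketch below is not the study's methodology; it assumes a hypothetical ~2.2:1 data-reduction ratio (not stated in the article) to show how the SSD count lands near 3,738, and it omits the redundancy overhead that pushes the raw HDD count past 40,000:

```python
import math

EXABYTE = 10**18  # bytes: a quintillion bytes, i.e. a billion gigabytes

def drives_needed(target_bytes: int, drive_tb: float,
                  data_reduction: float = 1.0) -> int:
    """Raw drive count to hold target_bytes, after an assumed
    data-reduction ratio (e.g. 2.2 means 2.2:1 compression/dedupe)."""
    effective = target_bytes / data_reduction
    return math.ceil(effective / (drive_tb * 10**12))

# One 122 TB SSD holds roughly what four 30 TB HDDs do:
print(122 / 30)                          # ≈ 4.07

# Raw counts for one exabyte:
print(drives_needed(EXABYTE, 30))        # 33,334 HDDs (40,000+ with redundancy)
print(drives_needed(EXABYTE, 122, 2.2))  # 3,726 SSDs, near the study's 3,738
```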

Minimizing data center footprints

"We’re shipping 122-terabyte drives to some of the top OEMs and leading AI cloud service providers in the world," Corell says. "When you compare an all-122TB SSD to hybrid HDD + TLC SSD configuration, they're getting a nine-to-one savings in data center footprint. And yes, it’s important in these massive data centers that are building their own nuclear reactors and signing hefty power purchase agreements with renewable energy providers, but it’s increasingly important as you get to the regional data centers, the local data centers, and all the way out to your edge deployments where space can come at a premium."

That nine-to-one savings goes beyond space and power — it lets organizations fit infrastructure into previously unavailable spaces, expand GPU scale, or build smaller footprints.

"If you’re given X amount of land and Y amount of power, you’re going to use it. You’re AI" Corell explains, “where every watt and square inch counts, so why not use it in the most efficient way? Get the most efficient storage possible on the planet and enable greater GPU scale within that envelope that you have to fit in. On an ongoing basis, it’s going to save you operational cost as well. You have 90 percent fewer storage bays to maintain, and the cost associated with that is gone."

Another often-overlooked element: the much larger physical footprint of data stored on mechanical HDDs results in a greater construction-materials footprint. Collectively, concrete and steel production accounts for over 15% of global greenhouse gas emissions. By reducing the physical footprint of storage, high-capacity SSDs can help reduce embodied concrete- and steel-based emissions by more than 80% compared to HDDs. And in the last phase of the sustainability life cycle, drive end-of-life, there will be 90 percent fewer drives to disposition.
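The "more than 80%" figure lines up with simple arithmetic on the nine-to-one footprint ratio quoted earlier, under the assumption (ours, not the article's) that embodied emissions scale roughly with the floor space storage occupies. A minimal sketch:

```python
# If embodied concrete-and-steel emissions scale roughly with the data
# center floor space that storage occupies, a 9:1 footprint reduction
# implies up to (1 - 1/9) ≈ 89% lower storage-attributable embodied
# emissions, consistent with the "more than 80%" claim in the text.

footprint_ratio = 9  # from the all-SSD vs. hybrid comparison above
reduction = 1 - 1 / footprint_ratio
print(f"{reduction:.0%}")  # 89%
```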

Reshaping cold and archival storage strategies

The move to SSD isn't just a storage upgrade; it's a fundamental realignment of data infrastructure strategy in the AI era, and it's picking up speed.

"Big hyperscalers are looking to wring the most out of their existing infrastructure, doing unnatural acts, if you will, with HDDs like overprovisioning them to near 90% to try to wring out as many IOPS per terabyte as possible, but they’re beginning to come around," Corell says. "Once they turn to a modern all high-capacity storage infrastructure, the industry at large will be on that trajectory. Plus, we're starting to see these lessons learned on the value of modern storage in AI applied to other segments as well, such as big data analytics, HPC, and many more."

While all-flash solutions are being embraced almost universally, there will always be a place for HDDs, he adds. HDDs will persist in usages like archival, cold storage, and scenarios where pure cost per gigabyte concerns outweigh the need for real-time access. But as the token economy heats up and enterprises realize value in monetizing data, the warm and warming data segments will continue to grow.

Solving power challenges of the future

Now in its 4th generation, with more than 122 cumulative exabytes shipped to date, Solidigm’s QLC (Quad-Level Cell) technology has led the industry in balancing higher drive capacities with cost efficiency.

"We don’t think of storage as just storing bits and bytes. We think about how we can develop these amazing drives that are able to deliver benefits at a solution level," Corell says. "The shining star on that is our recently launched, E1.S, designed specifically for dense and efficient storage in direct attach storage configurations for the next-generation fanless GPU server."

The Solidigm D7-PS1010 E1.S is a breakthrough, the industry’s first eSSD with single-sided direct-to-chip liquid cooling technology. Solidigm worked with NVIDIA to address the dual challenges of heat management and cost efficiency, while delivering the high performance required for demanding AI workloads.

"We’re rapidly moving to an environment where all critical IT components will be direct-to-chip liquid-cooled on the direct attach side," he says. "I think the market needs to be looking at their approach to cooling, because power limitations, power challenges are not going to abate in my lifetime, at least. They need to be applying a neocloud mindset to how they’re architecting the most efficient infrastructure."

Increasingly complex inference is pushing against a memory wall, which makes storage architecture a front-line design challenge, not an afterthought. High-capacity SSDs, paired with liquid cooling and efficient design, are emerging as the only path to meet AI’s escalating demands. The mandate now is to build infrastructure not just for efficiency, but for storage that can efficiently scale as data grows. The organizations that realign storage now will be the ones able to scale AI tomorrow.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
