
The opening keynote at this week’s Pure Accelerate 2025 conference in Las Vegas made a bold claim: in the world of AI, disk storage is becoming obsolete.
In support of this, Pure Storage CEO and Chairman Charles Giancarlo said that Meta has certified Pure DirectFlash Modules (DFMs) as the storage medium of choice for the AI applications running in its next-generation data centers. Meta plans to deploy 75 TB DFMs to replace traditional SSDs and eliminate disk from its AI factory architecture.
“Meta chose Pure Storage DFMs as they reduce overall data center footprint and power requirements by 25% while providing much higher performance at lower cost,” said Giancarlo.
Unveiling the FlashBlade//EXA product
He then announced the general availability of a flash product aimed specifically at high-end AI and high-performance computing (HPC). Known as FlashBlade//EXA, it is built for the high concurrency and massive volumes of metadata operations typical of large-scale AI and HPC workloads, and it can deliver more than 10 terabytes per second of read performance in a single namespace.
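To put that headline number in rough context, the back-of-the-envelope calculation below estimates how many GPUs such a namespace could feed concurrently. The per-GPU ingest rate is an illustrative assumption, not a figure from Pure Storage or the keynote.

```python
# Rough context for the 10 TB/s figure; the per-GPU rate is an assumption, not a spec.
namespace_read_tbps = 10        # stated: more than 10 TB/s in a single namespace
per_gpu_ingest_gbps = 1         # assumed sustained data feed per GPU, in GB/s

gpus_fed = (namespace_read_tbps * 1000) / per_gpu_ingest_gbps
print(f"Roughly {gpus_fed:,.0f} GPUs fed concurrently at that assumed rate")  # ~10,000
```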
Giancarlo laid out some of the benefits of FlashBlade//EXA. The product:
- Scales data and metadata independently (a conceptual sketch of this pattern follows the list).
- Provides massive scale with off-the-shelf, third-party data nodes that enable multi-dimensional performance.
- Reduces complexity in deployment, management, and scaling through the use of standard protocols and networking.
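The sketch below is not Pure Storage's API; it is a generic illustration, in Python, of what independent scaling of data and metadata means in a disaggregated design: a client resolves a file's layout from a metadata service, then reads the data extents directly from separate data nodes in parallel. All names and sizes are hypothetical.

```python
# Conceptual sketch (not Pure Storage's API): in a disaggregated architecture,
# a client first resolves a file's layout from a metadata service, then reads
# the data extents directly from independent data nodes in parallel.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical layout returned by a metadata service: each entry maps a
# byte range of the file to the data node that holds it.
layout = [
    {"node": "data-node-01", "offset": 0,          "length": 4 * 2**20},
    {"node": "data-node-02", "offset": 4 * 2**20,  "length": 4 * 2**20},
    {"node": "data-node-03", "offset": 8 * 2**20,  "length": 4 * 2**20},
]

def read_extent(extent):
    """Placeholder for a direct read from one data node (e.g., over NFS or RDMA)."""
    # A real client would issue the I/O to extent["node"] here.
    return b"\x00" * extent["length"]

# Because metadata (the layout) and data (the extents) live on separately
# scalable services, all extents can be fetched concurrently.
with ThreadPoolExecutor(max_workers=len(layout)) as pool:
    chunks = list(pool.map(read_extent, layout))

data = b"".join(chunks)
```

The design point is that metadata traffic and bulk data traffic scale on their own curves, so adding data nodes raises throughput without overloading the metadata path.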
“AI has disrupted the storage market, and legacy storage environments are unable to handle the massive parallelism required of AI and HPC,” said Matt Kimball, an analyst at consulting firm Moor Insights & Strategy. “With FlashBlade//EXA, Pure Storage is leveraging its decade of experience in unlocking the potential of metadata performance while abstracting the complexity associated with managing these environments.”
Keeping pace with AI and GPU capabilities
Traditional storage systems were not designed to meet modern AI requirements. When applied to large-scale AI and HPC, they face serious limitations due to requirements for parallel and concurrent reads and writes, metadata performance, ultra-low latency, asynchronous checkpointing, and predictable, high throughput. The storage platforms associated with AI engines and GPUs must be able to provide a parallel, disaggregated architecture to deliver flexibility at scale.
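Asynchronous checkpointing is one of those requirements worth unpacking. The minimal Python sketch below shows the general idea, independent of any vendor: the training loop snapshots its state and hands the write to a background thread, so compute continues while storage absorbs the burst. The functions, file names, and intervals are illustrative assumptions.

```python
# Minimal illustration of asynchronous checkpointing: the write is handed to
# a background thread so the (simulated) training step is not blocked on storage.
import pickle
import threading
import time

def train_step(step):
    time.sleep(0.01)                                  # stand-in for GPU compute
    return {"step": step, "weights": [0.0] * 1000}    # stand-in model state

def write_checkpoint(state, path):
    with open(path, "wb") as f:                       # stand-in for a write to shared storage
        pickle.dump(state, f)

pending = None
for step in range(100):
    state = train_step(step)
    if step % 10 == 0:                                # checkpoint every 10 steps
        if pending is not None:
            pending.join()                            # don't let checkpoints pile up
        snapshot = dict(state)                        # copy state before handing it off
        pending = threading.Thread(
            target=write_checkpoint, args=(snapshot, f"ckpt_{step}.pkl"))
        pending.start()                               # training continues while the write runs

if pending is not None:
    pending.join()
```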
“It’s time to stop managing storage and start managing data,” said Giancarlo. “With AI increasing the potential value of enterprise data and cyber-threats imperiling it, data storage architectures and the tools for managing data have not kept pace.”
The previous generation of high-performance storage systems was optimized for traditional HPC environments with predictable, regular workloads. AI workloads are far more complex and multi-modal: they deal with huge quantities of text, images, and video, all of it processed simultaneously by tens of thousands of GPUs. In such an environment, disk is much too slow, and even regular SSDs struggle to keep up.
FlashBlade//EXA was purpose-built for AI workloads, where the economics of GPU usage demand that the GPUs stay highly utilized at all times and therefore be fed by the fastest possible storage systems. Pure Storage's metadata engine and its Purity operating system put FlashBlade//EXA well ahead of the competition, according to Giancarlo.
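A quick illustrative calculation shows why utilization dominates the economics. Every figure below is an assumption chosen for the sake of the example, not a number from Pure Storage, Meta, or the keynote.

```python
# Illustrative back-of-the-envelope only; none of these figures come from the keynote.
gpu_cost_per_hour = 3.00      # assumed fully loaded cost of one high-end GPU ($/hour)
cluster_gpus = 10_000         # assumed cluster size
stall_fraction = 0.10         # assumed share of time GPUs sit idle waiting on storage

idle_cost_per_hour = gpu_cost_per_hour * cluster_gpus * stall_fraction
print(f"Idle cost per hour: ${idle_cost_per_hour:,.0f}")              # $3,000/hour
print(f"Idle cost per year: ${idle_cost_per_hour * 24 * 365:,.0f}")   # ~$26M/year
```

Even a modest storage-induced stall rate, at that assumed scale, wastes tens of millions of dollars of GPU time per year, which is the argument for pairing GPU clusters with the fastest available storage.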
In support of this, he cited Gartner’s latest Magic Quadrant for Primary Storage Platforms, which places Pure Storage at number one. To hold that position, the company invests more than 20% of its revenue in research. And while Meta is standardizing on 75 TB flash modules for its next-generation AI factories, Pure already offers 150 TB DFMs, with 300 TB modules due before the end of the year.
“We view data storage as high technology, not as a commodity,” said Giancarlo. “We are reinventing storage for AI and for the enterprise.”
While FlashBlade//EXA can scale to as many as 100,000 GPUs, that kind of magnitude really applies only to hyperscalers; enterprise needs will typically be far more modest. Several other versions of Pure’s FlashBlade arrays are better suited to enterprise deployments.
“Data is the fuel for enterprise AI factories, directly impacting performance and reliability of AI applications,” said Rob Davis, vice president, Storage Networking Technology, NVIDIA. “With NVIDIA networking, the FlashBlade//EXA platform enables organizations to leverage the full potential of AI technologies while maintaining data security, scalability, and performance for model training, fine tuning, and the latest agentic AI and reasoning inference requirements.”