Flattening the AI Data Path for Maximum Efficiency:

How Hammerspace and Xsight Labs eliminate legacy x86 storage servers to deliver the high-density, energy-efficient Open Flash Platform architecture.

Summary

Launched in 2025, the Open Flash Platform (OFP) reimagines shared storage to meet the high energy and cost demands of AI. By replacing complex, two-tiered x86 server designs with a single, flat architecture, Hammerspace and Xsight Labs have eliminated traditional bottlenecks, delivering a more scalable and power-efficient storage foundation.

Company

Hammerspace
Redwood City, CA
Software-Defined Storage

Solution

The solution leverages the Xsight E1 DPU to redefine data center storage by removing the barrier of traditional storage servers.

Results

  • 10X or greater increase in density by optimizing the form factor with the Open Flash Platform.
  • Up to 90% decrease in power by eliminating excess resources from the storage server data path.
  • Reduced TCO by 2/3rds due to significantly lower maintenance requirements.
  • 60% longer operating life by eliminating adherence to traditional x86 server lifecycles.

The challenge

Scaling data centers to meet the needs of AI workloads is putting tremendous pressure on every layer of the infrastructure stack. The sheer volume and complexity of AI clusters are raising power demand, tightening capacity, and inflating overall costs to unsustainable levels.

The Open Flash Platform (OFP) initiative began in 2025 as a fundamental rethink of shared storage architecture, aiming for the most efficient approach to the data demands of modern data centers. The initiative approaches the design from a collaborative, open-standards perspective that isn’t limited to a single storage software vendor, media type, or hardware manufacturer.

Modern shared file system architectures, even those using JBOFs (Just a Bunch of Flash), still rely on a two-tiered server design. This creates unnecessary complexity and inefficiency, and locks data away in proprietary storage systems. Controller servers, layers of storage enclosures, and internal networks introduce latency, bottlenecks, unproductive power draw, needless points of failure, and convoluted licensing.

The Open Flash Platform collapses the two-tiered storage model into a single, simple layer, eliminating the controller server entirely. Instead, client servers have a direct path to the shared storage media, leveraging standard Linux and NFS on an extremely efficient hardware platform. The result is a flat, scalable architecture that is dramatically simpler to deploy, power, cool, and manage.
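Because the client path uses standard Linux and NFS, an AI client node can mount an OFP sled with an ordinary NFS mount and no vendor-specific client software. A minimal sketch of such a client-side configuration, where the host name, export path, and option values are illustrative assumptions rather than published OFP defaults:

```shell
# /etc/fstab entry on an AI client node (host, export, and options are assumptions).
# NFS v4.2 with nconnect opens multiple TCP connections to spread load
# across the sled's high-speed Ethernet links.
ofp-sled-01:/export/data  /mnt/ofp  nfs  vers=4.2,nconnect=16,rw  0  0
```

The key design point this illustrates is that the "controller" disappears from the mount: the client talks straight to the storage sled over Ethernet, with orchestration kept outside the data path.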

The solution

Leveraging the Open Flash Platform design, Hammerspace and Xsight have engineered a solution that redefines data center storage by removing the barrier of the traditional storage server and creating a radically simplified hardware platform.

The Xsight E1 DPU, with its high Arm core density, memory bandwidth, and 800 Gbps Ethernet networking, is the ideal chip for today’s OFP designs, especially relative to more power-hungry x86 processors. By stacking eight U.2 QLC SSDs into each one-rack-unit sled, an OFP tray can hold five sleds in a standard rack, with OCP rack configurations also supported, delivering large, scalable capacity units that serve NFS directly to AI compute clusters, with orchestration kept outside the data path by the Hammerspace Data Platform.

This flattened architecture provides a path to an exabyte of storage per rack, connected directly to compute via high-speed Ethernet, eliminating the latency and cost of traversing the multiple hops of the current storage-server model.
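The density claim follows from simple compounding of sleds and drives. The sketch below makes the arithmetic explicit; the sleds-per-rack count and drive capacity are illustrative assumptions, not Hammerspace or Xsight specifications:

```python
# Illustrative OFP rack-capacity arithmetic.
# ssds_per_sled comes from the design above; the other two figures
# are assumptions chosen for illustration only.
ssds_per_sled = 8      # U.2 QLC SSDs per 1U sled
sleds_per_rack = 40    # assumed 1U sleds filling a standard rack
tb_per_ssd = 122       # assumed high-capacity QLC drive, in TB

rack_tb = ssds_per_sled * sleds_per_rack * tb_per_ssd
print(f"Raw capacity per rack: {rack_tb / 1000:.1f} PB")  # → 39.0 PB
```

As drive capacities grow, the same flat layout scales toward the exabyte-per-rack target without adding controller tiers.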

The results

The surge of data creation from emerging AI applications, coupled with limits on power availability, cooling, and data center space, demands a blank-slate approach to building AI data infrastructure.

10X or greater increase in density

by optimizing the form factor with OFP

60% longer operating life

by eliminating adherence to x86 server lifecycles

Up to 90% decrease in power

by eliminating excess resources from the storage server data path

Reduced TCO by 2/3rds

due to lower maintenance requirements

The Xsight DPU delivers the chip-level performance that makes this architecture work. With a balanced data path, AI clusters can access data quickly, without the excessive overhead previously required.

“Legacy storage is collapsing under the weight of AI. By fusing our data platform with Xsight’s 800G DPU, we flatten the data path and turn any Linux system into shared storage. The result is an open architecture that scales linearly with performance, slashes cost, and feeds GPUs at the speed AI demands.”

David Flynn

CEO of Hammerspace

Ready to start your journey?

Let's discuss how Xsight can address your infrastructure needs.