The Shift to Programmable RDMA: Future-Proofing AI Scale-Out Networks

AI at scale demands programmable RDMA. Xsight’s XFA delivers 800G software‑defined transport with low latency and flexibility traditional NICs can’t match.

Guy Tzalik

17 Feb 2026


AI workloads are scaling at an unprecedented pace. Large GPU clusters, distributed training, and data-intensive inference pipelines are driving massive east-west traffic across the data center. As a result, the network has become a performance bottleneck, directly impacting job completion time, utilization, and overall system efficiency.

The next generation of AI infrastructure needs Remote Direct Memory Access (RDMA) that is not only fast, but programmable. Meeting these demands requires more than bandwidth. It requires ultra-low latency, minimal CPU overhead, and the ability to adapt transport behavior as workloads and network conditions evolve. This is where traditional RDMA architectures begin to show their limits.
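To ground what those demands mean in practice, here is a minimal sketch of a one-sided RDMA WRITE using the standard libibverbs API. The queue pair, completion queue, registered memory region, and the peer's address and rkey are assumed to have been set up and exchanged beforehand; the point is that the CPU only posts a work request and polls for a completion, while the NIC moves the data directly between registered buffers.

```c
#include <stdint.h>
#include <infiniband/verbs.h>

/* Minimal sketch: issue a one-sided RDMA WRITE and wait for its completion.
 * Assumes qp/cq are already connected, local_buf is registered as mr, and
 * remote_addr/rkey were exchanged out of band (all hypothetical setup). */
static int rdma_write_once(struct ibv_qp *qp, struct ibv_cq *cq,
                           struct ibv_mr *mr, void *local_buf, uint32_t len,
                           uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 0x1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,   /* one-sided: no receiver CPU involved */
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr = NULL;
    struct ibv_wc wc;
    int n;

    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr))   /* hand the transfer to the NIC */
        return -1;

    do {                                   /* CPU work is limited to polling */
        n = ibv_poll_cq(cq, 1, &wc);
    } while (n == 0);

    return (n == 1 && wc.status == IBV_WC_SUCCESS) ? 0 : -1;
}
```

The data path never traverses the kernel or the remote host's CPU, which is why the transport layer underneath, fixed-function or programmable, determines so much of the end-to-end behavior.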

The Limits of Fixed-Function RDMA NICs

Although fixed-function RDMA NICs meet current performance requirements, they were designed for a far more static world. Their rigid hardware architecture makes it difficult to evolve alongside:

  • Rapidly changing AI communication patterns
  • Dynamic congestion behavior in large-scale GPU fabrics
  • Emerging transport protocols and congestion control mechanisms

As AI networks grow in scale and complexity, the inability to modify transport behavior becomes a fundamental constraint.

The next generation of AI infrastructure needs RDMA that is not only fast - but programmable.

A Software-Defined Approach to RDMA

Xsight Fabric Adapter (XFA) takes a fundamentally different approach to RDMA design.

Rather than relying on fixed-function transport logic, XFA implements RDMA in software, running on programmable compute cores within the Xsight E1 AIC - a platform optimized for modern GPU clusters.

This architecture enables:

  • Full control over the RDMA transport layer
  • Rapid adoption of new protocols and extensions
  • Programmable congestion control and flow management
  • Deep telemetry and real-time fabric visibility

Most importantly, it allows RDMA behavior to evolve at the pace of AI workloads - without waiting for new silicon revisions.
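As one illustration of what programmable congestion control can mean, the sketch below shows a DCQCN-style rate update of the kind that could run on a transport core. The state layout, constants, and function names are illustrative assumptions for this post, not XFA's actual firmware interface.

```c
#include <stdint.h>

/* Illustrative per-flow congestion-control state, DCQCN-style.
 * Field names and constants are assumptions for this sketch. */
struct cc_flow_state {
    double rate_gbps;        /* current sending rate */
    double target_gbps;      /* rate to recover toward */
    double alpha;            /* estimate of congestion severity, 0..1 */
};

#define CC_G          (1.0 / 16.0)   /* alpha update gain */
#define CC_LINE_GBPS  800.0          /* link-rate ceiling */
#define CC_AI_GBPS    5.0            /* additive-increase step */

/* Called when a congestion notification (e.g. an ECN-driven CNP) arrives. */
static void cc_on_congestion(struct cc_flow_state *f)
{
    f->target_gbps = f->rate_gbps;
    f->rate_gbps  *= (1.0 - f->alpha / 2.0);          /* multiplicative decrease */
    f->alpha       = (1.0 - CC_G) * f->alpha + CC_G;  /* congestion got worse */
}

/* Called on a periodic timer when no congestion notification was seen. */
static void cc_on_quiet_period(struct cc_flow_state *f)
{
    f->alpha        = (1.0 - CC_G) * f->alpha;        /* congestion is easing */
    f->target_gbps += CC_AI_GBPS;                     /* probe for more bandwidth */
    if (f->target_gbps > CC_LINE_GBPS)
        f->target_gbps = CC_LINE_GBPS;
    f->rate_gbps = (f->rate_gbps + f->target_gbps) / 2.0;  /* recover toward target */
}
```

Because this logic runs in software, the update rule itself (DCQCN today, an RTT-based or workload-aware scheme tomorrow) can change without touching silicon.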

RoCEv2 as a Software-Defined Transport

As part of the XFA solution, Xsight Labs supports a fully software-defined RoCEv2 implementation, executed entirely on the E1’s programmable compute cores.

This approach removes reliance on fixed-function RTL while preserving RDMA semantics, performance, and interoperability. It also creates a foundation for supporting additional transports and enhancements over time - including emerging AI-focused RDMA protocols.
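Concretely, a software RoCEv2 data path has to speak the same wire format as any fixed-function NIC: InfiniBand transport headers carried over UDP/IP, with UDP destination port 4791. Below is a minimal sketch of the Base Transport Header (BTH) that sits at the front of every RoCEv2 payload, with a hypothetical parse helper.

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define ROCEV2_UDP_DPORT 4791u   /* IANA-assigned UDP destination port for RoCEv2 */

/* InfiniBand Base Transport Header: the first 12 bytes of every RoCEv2 payload. */
struct ib_bth {
    uint8_t  opcode;      /* SEND, RDMA WRITE, RDMA READ, ACK, ... */
    uint8_t  flags;       /* solicited event, migration, pad count, header version */
    uint16_t pkey;        /* partition key */
    uint32_t qpn;         /* 8 reserved bits + 24-bit destination queue pair */
    uint32_t psn;         /* ack-request bit + 24-bit packet sequence number */
};

/* Hypothetical helper: pull the destination QP and PSN out of a UDP payload. */
static int bth_parse(const uint8_t *udp_payload, size_t len,
                     uint32_t *dest_qp, uint32_t *psn)
{
    struct ib_bth bth;

    if (len < sizeof(bth))
        return -1;
    memcpy(&bth, udp_payload, sizeof(bth));

    *dest_qp = ntohl(bth.qpn) & 0x00FFFFFFu;   /* low 24 bits */
    *psn     = ntohl(bth.psn) & 0x00FFFFFFu;   /* low 24 bits */
    return 0;
}
```

On top of this framing, the transport still has to provide reliable delivery per queue pair: PSN tracking, ACK/NAK generation, and retransmission. That is the kind of logic that moves from fixed-function RTL into software in this model.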

Software-defined transport is no longer a compromise. It is a strategic advantage.

Performance Without Compromise

A common concern with software-defined networking is performance. XFA addresses this directly by combining high-performance programmable cores with a scalable architecture designed for line-rate operation.

In practice, this architecture has demonstrated the ability to drive 800G of sustained RDMA throughput, while consuming only a portion of the available compute resources on the E1. The remaining headroom can be used for additional offloads (e.g. KV cache offload), telemetry processing, or advanced fabric services.
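To put 800G in perspective, a back-of-the-envelope calculation shows the packet budget a software transport has to sustain. The payload size, per-packet overhead, and core count below are illustrative assumptions, not measured XFA figures.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions only; not measured XFA figures. */
    const double line_rate_bps   = 800e9;  /* 800 Gb/s of RDMA traffic */
    const double payload_bytes   = 4096;   /* one 4 KiB RoCEv2 payload per packet */
    const double overhead_bytes  = 82;     /* Eth+IP+UDP+BTH+ICRC+FCS+preamble/IPG, approx. */
    const double transport_cores = 16;     /* cores assumed dedicated to the transport */

    double wire_bytes = payload_bytes + overhead_bytes;
    double pkts_per_s = line_rate_bps / (wire_bytes * 8.0);

    printf("packets/s total   : %.1f M\n", pkts_per_s / 1e6);
    printf("packets/s per core: %.2f M\n", pkts_per_s / transport_cores / 1e6);
    printf("time budget/packet: %.1f ns per core\n",
           1e9 / (pkts_per_s / transport_cores));
    return 0;
}
```

Even under these rough numbers, each transport core has a budget of several hundred nanoseconds per packet, which is what leaves headroom for telemetry and additional offloads once the data path itself is efficient.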

These results validate that software-defined RDMA can deliver both flexibility and line-rate performance - a requirement for modern AI scale-out networks. (Performance results were demonstrated as part of Xsight’s E1 showcase.)

Looking Forward

The future of AI infrastructure demands networking platforms that are as programmable as the compute they connect. Xsight’s software-defined RDMA approach rethinks the role of the NIC - transforming it from a static endpoint into a flexible, programmable fabric element.

The result is an RDMA solution designed not just to meet today’s performance requirements, but to evolve with the next generation of AI scale-out workloads.

Ready to go deeper?

Our engineers built these solutions to solve real infrastructure challenges. See how they apply to your environment.