Rearchitecting the Foundation of AI and Cloud Infrastructure
Solutions Powered by Xsight Labs
“The Network is the Computer” is a slogan coined by Sun Microsystems in 1984. Forty years later, this statement not only holds true but is even more significant as we enter the new era of distributed computing. At Xsight Labs, our vision is to enable a democratized ecosystem for high-bandwidth, low-latency scalable networks by delivering a broad portfolio of end-to-end connectivity solutions for Cloud and AI infrastructure. Xsight Labs is leading the charge, providing cloud service providers and hyperscalers with substantial bandwidth expansion while reducing power consumption.
Xsight Labs’ X-Switch family of programmable switches delivers the right blend of performance and versatility to power the most demanding requirements of front-end compute, storage, and back-end AI networks at the lowest TCO.
Xsight Softens the DPU
Powering SmartNICs, the data processing unit (DPU) has become nearly ubiquitous in the leading public clouds. Existing designs maximize power efficiency for a constrained feature set, and they require proprietary software tools. Xsight Labs aims to break this paradigm with its new E1 DPU, which promises the openness of an Arm® server CPU. Xsight Labs sponsored the creation of this white paper, but the opinions and analysis are solely those of the author.
X2: An Optimized ToR Switch for 100G SerDes Generation
There is a constant need to refresh front-end networks to serve the application, compute, and storage requirements of end customers. The largest cloud service providers now operate front-end networks with millions of servers deployed, and these networks must be continually refreshed even as they continue to grow. The drive toward higher bandwidth and lower power is insatiable. The new generation of fabric switches (51.2T), 800G optics, and network interface cards (network adapters) all support high-speed 100G PAM4 links. The industry will benefit from a new generation of Top-of-Rack (ToR) switches built for the 100G SerDes era.
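To put those speeds in perspective, the short Python sketch below shows how these common link rates decompose into 100G PAM4 SerDes lanes. The lane counts follow from standard Ethernet arithmetic and are illustrative only, not Xsight Labs product specifications.

    # Illustrative lane arithmetic: how common Ethernet speeds decompose into
    # 100G PAM4 SerDes lanes. Standard lane counts, not product specifications.

    LANE_GBPS = 100  # one PAM4 SerDes lane in the 100G SerDes generation

    links = {
        "400G NIC port": 400,
        "800G optical module": 800,
        "12.8T ToR switch": 12_800,
        "51.2T fabric switch": 51_200,
    }

    for name, gbps in links.items():
        print(f"{name}: {gbps // LANE_GBPS} x {LANE_GBPS}G lanes")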
Enter the X2 Switch
With 128 x 100G PAM4 SerDes, X2 is an ideal solution for the transition to the 100G SerDes era. Supporting full line rate of 12.8Tbps at 200W, the X2 delivers a 40% power savings over incumbent 16nm, 50G SerDes-based ToR switches, which translates into tens of millions of dollars in OpEx savings per year.
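As a rough illustration of how a per-switch power reduction compounds into fleet-level OpEx, the back-of-the-envelope Python sketch below assumes a hypothetical fleet size, baseline switch power, electricity price, and PUE; none of these inputs are Xsight Labs figures.

    # Back-of-the-envelope OpEx estimate for a ToR power reduction.
    # All inputs below are illustrative assumptions, not Xsight Labs data.

    baseline_watts = 333      # assumed 50G SerDes-era ToR draw (200 W is ~40% less)
    x2_watts = 200            # X2 full-line-rate figure cited above
    fleet_size = 200_000      # hypothetical ToR count serving several million servers
    usd_per_kwh = 0.10        # assumed blended electricity price
    pue = 1.3                 # assumed data center power usage effectiveness

    hours_per_year = 24 * 365
    delta_kw = (baseline_watts - x2_watts) * fleet_size / 1000
    annual_savings_usd = delta_kw * pue * hours_per_year * usd_per_kwh

    print(f"Estimated annual OpEx savings: ${annual_savings_usd:,.0f}")  # ~$30M

Under these assumed inputs the fleet-wide saving lands in the tens of millions of dollars per year; the actual figure depends heavily on fleet size, electricity price, and PUE.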
Fabric Switch for AI at the Edge
Not all AI clusters are built the same. Today’s largest clusters require upwards of 32,000 GPUs to train models like GPT-4 or Llama 3. Additionally, many cloud service providers are looking to bring AI capabilities closer to their end customers by providing AI at the Edge. These Edge clusters tend to be much smaller and are also constrained in real estate and power.
With native support for features like end-to-end congestion-aware routing, dynamic load balancing, incast tolerance, fast link recovery, and a broad set of telemetry tools, X2 delivers the key features needed for the high-efficiency lossless networks that AI clusters require. X2 also supports the evolving transport-layer requirements of the Ultra Ethernet Consortium (UEC). Delivering these features in a small, air-cooled form factor makes X2 an ideal fabric switch for AI at the edge.
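As a concept-level illustration of congestion-aware routing with dynamic load balancing and fast link recovery, the minimal Python sketch below steers each new flowlet to the healthy uplink with the shallowest queue. It is a generic sketch of the technique under assumed, hypothetical names; it is not a description of the X2’s internal algorithm.

    # Minimal concept sketch of congestion-aware uplink selection.
    # Generic illustration of dynamic load balancing; not the X2's implementation.

    from dataclasses import dataclass

    @dataclass
    class Uplink:
        name: str
        queue_bytes: int       # locally observed queue depth (congestion signal)
        link_up: bool = True   # fast link recovery flips this on failure/repair

    def pick_uplink(uplinks: list[Uplink]) -> Uplink:
        """Send the next flowlet to the healthy uplink with the shallowest queue."""
        healthy = [u for u in uplinks if u.link_up]
        if not healthy:
            raise RuntimeError("no healthy uplinks available")
        return min(healthy, key=lambda u: u.queue_bytes)

    uplinks = [
        Uplink("spine-1", queue_bytes=48_000),
        Uplink("spine-2", queue_bytes=12_000),
        Uplink("spine-3", queue_bytes=0, link_up=False),  # failed link is skipped
    ]
    print(pick_uplink(uplinks).name)  # -> spine-2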
Programmable Switch for Edge and Enterprise Cloud
Data center traffic is growing at an exponential rate, driven by the movement of enterprise infrastructure to the cloud. Rapidly evolving technologies like AI, blockchain, IoT, and 5G, to name a few, have resulted in an ocean of data that needs to be processed and acted upon. Consolidation of compute and data resources is creating an urgent need for networks that are fast, flexible, more reliable, and transparent.
The application-optimized programmable architecture of the X-Switch delivers the right mix of bandwidth, scale, and feature richness needed for enterprise cloud deployments. The X2 switch enables enterprise cloud customers to operate their networks at peak efficiency by delivering features that seamlessly extend compute and storage resources from on-prem to cloud deployments.