At AI Infrastructure Field Day 4, Xsight Labs presented their silicon story. And while the technology was impressive, what stuck with me was a bigger question: why don't network architects and engineers think more about what's inside their switches?
I came up as a network architect. We bought switches, not chips. We evaluated vendors, features, management platforms, support contracts. The silicon inside? That was abstracted away. Somebody else's problem.
For traditional enterprise networking, that worked fine. But AI infrastructure is changing the equation. The silicon inside your switch now determines power, flexibility, and how fast you can adapt when requirements change.
It's time to start asking different questions.
Why Silicon Matters Now
Three things are converging:
Power is becoming a constraint. Data centers are hitting power and cooling ceilings. When one switch runs at 180 watts and another runs at 600 watts for similar bandwidth, that's not just a spec difference; it's rack density, cooling costs, and whether your facility can support the build at all (quick math below).
Protocols are evolving faster than hardware cycles. Ultra Ethernet Consortium specs, new congestion management schemes, new telemetry requirements. Fixed-pipeline silicon bakes in today's assumptions. What happens when the assumptions change?
AI networking demands are different. Lossless transport, low latency, flexible instrumentation. The silicon determines what's possible.
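On the power point, a quick back-of-the-envelope shows why watts per terabit is the number to normalize on. The 180 W and 600 W figures are the ones from the example above; the 12.8 Tbps capacity and the 64-switch build are assumptions of mine, purely for illustration:

```python
# Normalize switch power draw to watts per terabit of capacity.
# The 12.8 Tbps capacity and 64-switch count are illustrative
# assumptions; only the wattages echo the example above.

def watts_per_tbps(watts: float, tbps: float) -> float:
    return watts / tbps

CAPACITY_TBPS = 12.8  # assumed, for illustration

for name, watts in [("Switch A", 180), ("Switch B", 600)]:
    print(f"{name}: {watts_per_tbps(watts, CAPACITY_TBPS):.1f} W/Tbps")

# The per-switch delta compounds across a build:
SWITCHES_IN_BUILD = 64  # assumed
delta_kw = (600 - 180) * SWITCHES_IN_BUILD / 1000
print(f"Extra power across {SWITCHES_IN_BUILD} switches: {delta_kw:.1f} kW")
```

On made-up but plausible numbers, that's nearly 27 kW of difference before you've discussed a single feature. That's exactly the kind of delta facilities teams care about.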
Xsight Labs as an Example
Xsight Labs is a fabless semiconductor company, meaning they design chips and contract out the manufacturing. They showed two products: an Ethernet switch chip (the X-series) and a DPU (the E-series). I'm not a chip architect, but what caught my attention was what their approach means for network architects like me.
The headline is programmability. Most switch silicon uses fixed pipelines—packets go through predetermined processing stages, and that logic is baked in at the factory. If you want new features or protocol support, you're often waiting for the next hardware generation. That's a multi-year cycle.
Xsight's architecture is different. The forwarding logic can be updated through software without swapping hardware. When the Ultra Ethernet Consortium finalizes a new spec, when your team needs custom telemetry, when congestion management schemes evolve, you're not waiting for a forklift upgrade. That's feature velocity, and for AI networking, where requirements are still a moving target, it matters.
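To make the fixed-versus-programmable distinction concrete, here's a toy sketch in Python. This is my mental model of the concept, not Xsight's actual SDK or programming interface; the stage names and the UEC telemetry field are invented for illustration:

```python
# Toy model: a pipeline whose stages are data, not hardwired logic.
# Nothing here reflects any vendor's real API.
from typing import Callable

Packet = dict                        # stand-in for a parsed packet
Stage = Callable[[Packet], Packet]   # one match-action processing step

def parse_eth(pkt: Packet) -> Packet:
    pkt["l2_done"] = True
    return pkt

def route_ipv4(pkt: Packet) -> Packet:
    pkt["next_hop"] = "lookup result"
    return pkt

class ProgrammablePipeline:
    def __init__(self, stages: list[Stage]):
        self.stages = stages

    def process(self, pkt: Packet) -> Packet:
        for stage in self.stages:
            pkt = stage(pkt)
        return pkt

    def upgrade(self, index: int, new_stage: Stage) -> None:
        # The key difference: stages can be replaced in the field.
        self.stages[index] = new_stage

pipeline = ProgrammablePipeline([parse_eth, route_ipv4])

# A new spec lands (say, a UEC-style congestion signal: hypothetical):
def route_with_uec_telemetry(pkt: Packet) -> Packet:
    pkt = route_ipv4(pkt)
    pkt["telemetry"] = "uec congestion signal"
    return pkt

pipeline.upgrade(1, route_with_uec_telemetry)  # software update, same box
print(pipeline.process({"raw": "frame bytes"}))
```

In a fixed-function chip, the equivalent of upgrade() doesn't exist: the stage sequence is committed at tape-out, and new behavior generally means new silicon.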
The Questions Enterprises Should Be Asking
When you’re evaluating switches for AI infrastructure—or increasingly, for any serious data center build—dig one layer deeper:
- What’s the power envelope per terabit?
- Is the forwarding pipeline fixed or programmable?
- Can the silicon adapt to protocol changes, or does that require a forklift upgrade?
- What’s the latency under real conditions?
This isn’t about abandoning your vendor relationships. It’s about understanding what you’re buying. The logo on the front matters less than the silicon inside.
The Bigger Picture
White-box and ODM options are maturing, and enterprises have choices that didn't exist five years ago.
You don’t need to become a chip expert. But as AI infrastructure pushes networking requirements in new directions, the architects who understand what silicon can and can’t do will make better decisions.
The chip inside your switch matters. Start asking about it.
Disclosure: I attended AI Infrastructure Field Day 4 as a delegate. My travel and accommodations were covered, but I am not compensated for my writing, and my opinions are my own.
FTC Disclosure Statement: https://www.linkedin.com/pulse/ftc-disclosure-statement-brad-gregory-pgjxc
LinkedIn: Brad Gregory, https://www.linkedin.com/in/brad-gregory/
Company Website: www.mcnstrategies.com
