Nexthop AI Unveils Architecture to Slash AI Data Center Costs

📊 Key Data
  • 30% lower costs and 30% lower power consumption with Disaggregated Spine architecture vs. traditional chassis-based systems
  • 15-20% power savings with NH-4010 switch, translating to tens of megawatts in large AI clusters
  • $200 billion market forecast for AI Ethernet switching over the next decade
🎯 Expert Consensus

Experts view Nexthop AI's Disaggregated Spine architecture as a transformative solution for reducing AI data center costs and energy consumption, with strong industry backing from hyperscalers and open networking advocates.

SANTA CLARA, CA – March 10, 2026 – In a move poised to reshape the economics of artificial intelligence infrastructure, Nexthop AI today launched a new portfolio of high-performance switches and unveiled its groundbreaking Disaggregated Spine architecture. The announcement positions the two-year-old startup, founded by former Arista Networks COO Anshul Sadana and recently valued at $4.2 billion, as a formidable force aiming to solve the critical networking bottlenecks and soaring energy costs that threaten to slow the AI revolution.

Co-developed with a major hyperscaler, the new architecture promises to deliver significant cost and power savings, addressing the most pressing challenges faced by large-scale data center operators. The company's strategy combines hardware innovation with a deep commitment to open-source networking, providing flexibility and efficiency for both established cloud giants and a new generation of specialized AI cloud providers.

The Disaggregated Future of AI Data Centers

At the heart of the announcement is the Disaggregated Spine architecture, a novel design that dismantles the traditional, monolithic chassis-based systems that have long dominated data center networks. These legacy systems typically lock customers into a single vendor's proprietary hardware and software. In contrast, Nexthop AI’s architecture decomposes the network into independent, optimized functional tiers: a scale-across leaf tier facing the internal data center fabric and a scale-across spine tier for data center interconnects.

This disaggregation offers substantial benefits. The company claims the architecture achieves 30% lower costs and 30% lower power consumption compared to its chassis-based predecessors. By separating hardware from the network operating system (NOS), it empowers customers to adopt open-source software like SONiC (Software for Open Networking in the Cloud), a project to which Nexthop is a top global contributor. This grants hyperscalers unprecedented control and frees them from vendor lock-in.

“We are delighted to see the tremendous contributions Nexthop has made to the open networking community in such a short time,” said Dave Maltz, Principal Network Architect for Azure Networking at Microsoft, in a statement that strongly suggests Microsoft is the key hyperscaler partner behind the architecture. “They are partnering with the community on several new initiatives, including pioneering new concepts like the Disaggregated Spine.”

This approach is critical for handling the unique traffic patterns of AI workloads, which involve massive, parallel data flows between thousands of GPUs. The architecture's deep buffers, line-rate MACsec encryption, and expanded routing tables are all engineered to manage these intense demands without creating performance bottlenecks.

Powering the AI Revolution with Unprecedented Efficiency

The immense energy consumption of AI clusters has become a critical industry-wide concern, with power availability now a primary constraint on data center expansion. Nexthop AI is tackling this challenge head-on with a new product portfolio built for extreme efficiency.

The lineup includes the NH-4010, which the company calls the industry's lowest-power 51.2 Tbps switch. Built on Broadcom's Tomahawk 5 silicon, the switch has delivered 15-20% power savings in direct customer comparisons, according to Nexthop, which could translate to tens of megawatts of total savings at the scale of a modern AI cluster. Also announced was the NH-4220, billed as the highest-density air-cooled 102.4 Tbps system, based on Broadcom's next-generation Tomahawk 6 silicon and designed for seamless upgrades.

These efficiency claims are supported by the underlying technology. Broadcom's Tomahawk 5 chip is widely recognized for its low power profile, consuming less than a watt per 100 Gbps of switching capacity. The close collaboration with the silicon vendor lends credibility to Nexthop's performance and efficiency metrics.
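The headline numbers lend themselves to a quick back-of-envelope check. The sketch below is illustrative only: the fleet size and per-switch wattage are hypothetical assumptions for the sake of arithmetic, not figures from Nexthop or Broadcom.

```python
# Back-of-envelope check of the efficiency figures quoted above.
# The switch count and per-switch wattage are hypothetical
# illustrations, not vendor data.

def asic_power_budget_w(capacity_tbps, watts_per_100gbps=1.0):
    """Power (watts) implied by a watts-per-100-Gbps figure at a given capacity."""
    return (capacity_tbps * 1_000 / 100) * watts_per_100gbps

def fleet_savings_mw(num_switches, watts_per_switch, savings_fraction):
    """Total megawatts saved if every switch cuts its draw by savings_fraction."""
    return num_switches * watts_per_switch * savings_fraction / 1e6

# "Under a watt per 100 Gbps" at 51.2 Tbps implies an ASIC budget below ~512 W.
print(asic_power_budget_w(51.2))

# Hypothetical deployment: 40,000 switches at ~3 kW each (ASIC, optics, fans),
# each saving 20% of its draw -- roughly "tens of megawatts" at hyperscale.
print(fleet_savings_mw(40_000, 3_000, 0.20))
```

Varying the assumptions shifts the absolute numbers, but because a fixed 15-20% relative saving scales linearly with fleet size, the claim only reaches megawatt magnitude at hyperscale deployments.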

“We are pleased to collaborate with Nexthop on SONiC and SAI-based architectures to deliver standards-based Ethernet switching solutions,” said Asad Khamisy, a senior vice president at Broadcom. He noted that Nexthop’s “deep system expertise and integration of our low-power switching silicon have enabled scalable, highly power-efficient solutions.”

Championing Open Networking for Hyperscalers and NeoClouds

Beyond hardware, Nexthop AI’s strategy is deeply rooted in the open networking movement. As a governing board member of the Linux Foundation's SONiC project, the company empowers its customers to run their preferred network operating system, whether it's a customized internal version of SONiC or another open platform like FBOSS.

This flexibility is particularly appealing to a growing market segment the company calls “NeoClouds.” These are a new breed of AI-first cloud providers that offer specialized GPU-as-a-Service (GPUaaS) infrastructure. Unlike general-purpose hyperscalers, NeoClouds build their entire stack for the express purpose of running high-performance AI workloads. They require networking that is not only fast and efficient but also customizable and easy to deploy.

For this segment, Nexthop offers its hardware as a turnkey solution, fully integrated with a hardened and supported version of its own network operating system powered by SONiC. This allows NeoClouds to deploy world-class networking infrastructure without the extensive in-house engineering resources of a major hyperscaler, enabling them to focus on their core AI service offerings.

Tapping into a Multi-Billion Dollar Market

Nexthop AI is entering a market experiencing explosive growth. Citing research from 650 Group, the company highlights a forecast that the Ethernet switching market for AI networking will approach $200 billion over the next decade. While specific figures from market analysts vary, a broad industry consensus points toward unprecedented demand for AI-related infrastructure.

Projections from firms like IDC and Precedence Research estimate the broader AI infrastructure market will climb into the hundreds of billions of dollars by the end of the decade, driven by the relentless expansion of AI data centers and the need for high-bandwidth, low-latency fabrics to connect vast GPU clusters.

“Ethernet Switching is a key building block for AI Networking,” noted Alan Weckel, founder and technology analyst at 650 Group. “Nexthop AI is taking a unique co-development approach to product development and their initial platforms represent the start of a foundational portfolio that raises the bar to fundamentally address the efficiency, density, and reliability challenges to support 800G and 1.6T deployments.”

By combining a co-development model with a focus on power efficiency and a commitment to open standards, Nexthop AI is strategically positioned to capture a significant share of this burgeoning market, providing the critical infrastructure needed to power the next wave of artificial intelligence.

