SambaNova, Intel Forge Alliance to Challenge NVIDIA's AI Dominance

📊 Key Data
  • $350M Funding Round: SambaNova secures over $350 million in Series E funding to scale SN50 production and expand SambaCloud.
  • SN50 Performance: The SN50 chip achieves 895 tokens per second per user with Meta's Llama 3.3 70B model, 5x faster than NVIDIA's B200 GPU.
  • Cost Efficiency: SambaNova claims a 3x lower total cost of ownership (TCO) with the SN50, operating within existing data center power envelopes using standard air cooling.
🎯 Expert Consensus

Experts view SambaNova's SN50 chip and its alliance with Intel as a significant challenge to NVIDIA's AI dominance, citing the platform's claimed advantages in performance, cost efficiency, and deployment flexibility for enterprise AI.


DUBAI, United Arab Emirates – February 26, 2026 – The high-stakes battle for the future of artificial intelligence infrastructure has intensified, as SambaNova Systems today unveiled a trio of strategic moves aimed at disrupting the market. The company announced its next-generation SN50 AI chip, a multi-year collaboration with industry giant Intel, and a massive new funding round of over $350 million, signaling a direct challenge to NVIDIA's long-standing dominance.

The announcement positions SambaNova not just as a hardware innovator, but as the architect of a comprehensive, cost-effective alternative for enterprises racing to deploy the next wave of autonomous AI agents.

“AI is no longer a contest to build the biggest model,” said Rodrigo Liang, co‑founder and CEO of SambaNova, in a statement. “The real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”

The New Frontier: Hardware for Agentic AI

At the heart of the announcement is the SN50, a chip purpose-built for what the industry calls 'agentic AI.' These are not the passive, prompt-driven models of the past, but autonomous systems capable of perception, reasoning, and executing multi-step tasks to achieve goals. From automating complex financial analysis to managing entire IT security systems, agentic AI workloads demand ultra-low latency and the ability to process immense context—requirements that strain traditional, GPU-based infrastructures.

SambaNova claims the SN50 delivers a decisive edge in this new arena. Citing benchmarks from industry analyst firm SemiAnalysis, the company asserts its new chip delivers up to five times the speed of competing chips on specific agentic AI tasks. For instance, when running Meta's Llama 3.3 70B model, the SN50 reportedly achieves 895 tokens per second per user, a significant leap over the 184 tokens per second attributed to NVIDIA's B200 GPU under similar conditions.
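As a quick sanity check, the quoted per-user throughput figures do roughly support the headline "5x" claim; a minimal Python calculation using only the numbers reported above:

```python
# Sanity check of the claimed speedup, using the figures quoted in the article:
# SN50 at 895 tokens/s/user vs. NVIDIA B200 at 184 tokens/s/user
# on Meta's Llama 3.3 70B (per the SemiAnalysis benchmark SambaNova cites).

sn50_tps = 895   # tokens per second per user, SN50 (claimed)
b200_tps = 184   # tokens per second per user, B200 (claimed)

speedup = sn50_tps / b200_tps
print(f"Implied speedup: {speedup:.2f}x")  # prints "Implied speedup: 4.86x"
```

The exact ratio works out to roughly 4.9x, which the company rounds up to the "five times faster" figure in its marketing.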

This performance leap is enabled by SambaNova’s unique Reconfigurable Data Unit (RDU) architecture. The SN50 features a sophisticated three-tier memory system (SRAM, HBM, DDR5) that allows it to hold multiple large models in memory simultaneously, drastically reducing the time it takes to get the first piece of information back to the user—a critical metric known as time-to-first-token. The system can link up to 256 accelerators, providing massive scale and concurrency for thousands of simultaneous AI sessions. Perhaps most critically for enterprise adoption, SambaNova claims this performance comes with a 3x lower total cost of ownership (TCO) and operates within existing data center power envelopes using standard air cooling, avoiding the costly transition to liquid cooling required by some high-performance alternatives.

“The new SambaNova SN50 RDU changes the tokenomics of AI inference at scale,” noted Peter Rutten, Research Vice-President at analyst firm IDC. “By delivering both high performance and high throughput with a chip that uses existing power and is air cooled, SambaNova is changing the game.”

A Strategic Alliance to Build a GPU Alternative

While innovative hardware is crucial, a robust ecosystem is paramount for market adoption. To this end, SambaNova has entered into a planned multi-year strategic collaboration with Intel. The partnership aims to deliver a powerful, cost-efficient alternative to the GPU-centric solutions that currently dominate the AI cloud.

As part of the deal, Intel Capital is making a strategic investment in SambaNova. The collaboration will focus on integrating SambaNova's specialized systems with Intel's vast portfolio of CPUs, networking, and storage technologies to create a new blueprint for heterogeneous AI data centers. The partnership will span three key areas: scaling SambaNova’s cloud built on Intel Xeon infrastructure, co-engineering integrated AI systems, and executing a joint go-to-market strategy through Intel's massive global sales channels.

For Intel, the move represents a shrewd strategy to fortify its position in the burgeoning AI inference market. By partnering with a leading accelerator specialist, Intel can offer customers a compelling, optimized stack for production AI without having to solely rely on its own GPU roadmap to compete with NVIDIA.

“Customers are asking for more choice and more efficient ways to scale AI,” said Kevork Kechichian, an Executive Vice President at Intel. “By combining Intel’s leadership in compute, networking, and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”

Fueling Global Expansion and Sovereign AI

The ambition of this new alliance is backed by significant financial firepower. SambaNova’s oversubscribed Series E round of over $350 million was led by Vista Equity Partners and Cambium Capital, with strong participation from Intel Capital and a host of new and existing investors from around the globe, including entities from the Middle East.

The influx of capital will be used to scale SN50 production and expand SambaCloud, but its strategic importance lies in fueling a global push, particularly in the burgeoning field of 'sovereign AI.' This trend sees nations and large corporations seeking to build and control their own AI infrastructure to ensure data privacy, security, and strategic autonomy.

Underscoring this trend, SoftBank Corp. was announced as the first customer to deploy the SN50 in its next-generation AI data centers in Japan. The deployment will serve as the backbone for low-latency inference services for sovereign and enterprise clients across the Asia-Pacific region.

“With SN50, we are building an AI inference fabric for Japan that can serve our customers and partners with the speed, resiliency and sovereignty they expect from SoftBank,” said Hironobu Tamba, a Vice President at SoftBank Corp. “We gain the ability to deliver world-class AI services on our own terms — with the performance of the best GPU clusters, but with far better economics and control.”

This focus on sovereign capabilities is echoed by new investors like Saudi Arabia's First Data, whose chairman, Mr. Sharaf Al Hariri, stated that the investment is a core part of a strategy to bring advanced, sovereign-ready AI to the Kingdom and the wider Middle East. This global approach signals a fundamental shift in the market, as noted by Landon Downs, co-founder of investor Cambium Capital: “AI is moving from a software story to an infrastructure story.” With its new chip, powerful new ally, and a formidable war chest, SambaNova is betting it can write the next chapter of that story.
