Salience Labs' Optical Switch Aims to Rewire AI's High-Energy Future
- 8x reduction in power consumption compared to current solutions
- 80% increase in 'Tokens per Second / User' for AI workloads
- Per-hop latency as low as 100 nanoseconds with all-optical switching
Salience Labs' all-optical switch targets a long-standing bottleneck in AI networking, promising significant gains in power consumption, latency, and scalability for large-scale AI infrastructure.
OXFORD, England – March 10, 2026 – As the artificial intelligence boom strains global energy grids and creates unprecedented data traffic jams, Oxford-based Salience Labs has launched a new weapon in the fight for efficiency: the industry's highest-performing 32-port all-optical switch. The technology promises to rewire the foundational networking layer of AI datacenters, drastically cutting latency and power consumption to unlock the next wave of AI performance.
The announcement comes at a critical juncture for the tech industry. AI models are growing exponentially in complexity, demanding massive clusters of GPUs that must communicate seamlessly and instantly. This has created a severe "I/O bottleneck," where the speed of moving data, not raw computing power, is the primary limiting factor. Traditional electronic packet switches, the long-standing traffic cops of datacenter networks, are struggling to keep pace, consuming vast amounts of power and introducing costly delays.
A Fundamental Leap to Light-Speed
Salience Labs’ solution represents a fundamental architectural shift. Instead of converting optical signals to electronic ones for routing and then back to optical for transmission—a process that burns energy and time—its all-optical circuit switch (OCS) routes data entirely in the photonic domain. This approach keeps data as light from end to end within the switch.
“Optical switching is moving networks from electronic packet routing to highly predictable, energy-efficient optical connectivity,” said Vaysh Kewada, CEO and co-founder of Salience Labs, in a statement. “We are transforming the networking layer, unlocking the ability to extend scale-up and scale-out networks across the datacenter.”
The company claims its silicon photonic switch delivers dramatic performance gains. By eliminating the need for optical transceivers within the switching fabric, it can achieve up to an 8x reduction in power consumption compared to current solutions. For AI workloads, this translates to a critical improvement in user experience, with Salience Labs reporting up to an 80% increase in "Tokens per Second / User"—a key metric measuring the responsiveness and throughput of large language models.
Industry analysis supports the potential of this technology. OCS platforms are known to reduce latency by a factor of 10 to 100, achieving delays as low as 100 nanoseconds per hop. This near-instantaneous communication is vital for synchronizing the thousands of GPUs required for training and running state-of-the-art AI. The company's technology, which supports the latest 200G data rates, is designed to be "effectively near-lossless at the system level," preserving the signal integrity essential for high-performance computing.
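Those latency and power figures lend themselves to a quick back-of-envelope comparison. The sketch below contrasts an assumed electronic packet-switched fabric with an optical one, using the 100-nanosecond-per-hop and 8x power figures quoted above; the electronic baseline latency, the hop count, and the baseline power draw are illustrative assumptions, not measured or vendor-supplied values.

```python
# Back-of-envelope comparison using the figures quoted in the article.
# Assumptions (not vendor specs): electronic per-hop latency and the
# baseline power draw are picked purely for illustration.

ELECTRONIC_HOP_NS = 5_000   # assumed mid-range electronic packet-switch latency
OPTICAL_HOP_NS = 100        # per-hop OCS latency cited in the article
POWER_REDUCTION = 8         # article's claimed power-reduction factor

def fabric_latency_ns(hops: int, per_hop_ns: int) -> int:
    """Total switching latency accumulated across a multi-hop fabric path."""
    return hops * per_hop_ns

hops = 3  # e.g. leaf -> spine -> leaf in a typical Clos topology
electronic = fabric_latency_ns(hops, ELECTRONIC_HOP_NS)
optical = fabric_latency_ns(hops, OPTICAL_HOP_NS)

print(f"Electronic fabric: {electronic} ns over {hops} hops")
print(f"Optical fabric:    {optical} ns over {hops} hops")
print(f"Latency reduction: {electronic / optical:.0f}x")

baseline_kw = 20.0  # assumed power draw of an electronic switching tier, in kW
print(f"Power at 8x reduction: {baseline_kw / POWER_REDUCTION:.1f} kW vs {baseline_kw} kW")
```

Under these assumptions a three-hop path improves by roughly 50x, which sits inside the 10x-to-100x range cited above; the real gain depends entirely on the electronic baseline being displaced.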
Tackling AI’s Voracious Energy Appetite
The performance benefits are matched by a crucial environmental and economic imperative. The global datacenter industry is facing an energy crisis. According to projections from the U.S. Energy Information Administration, data centers could consume as much as 6.6% of the country's total electricity by 2028, a staggering increase from today's levels, largely driven by the demands of AI.
In this context, technologies that slash power consumption are no longer a luxury but a necessity for sustainable growth. Salience Labs’ optical switch directly addresses this challenge. The significant power savings not only reduce a datacenter's carbon footprint but also translate directly into lower operational expenditures (OPEX) for companies grappling with soaring energy bills. Furthermore, by removing entire layers of electronic components and their associated transceivers, the architecture can also lower upfront system costs (CAPEX) and simplify datacenter design.
This push for efficiency is a sector-wide trend. Lumentum, a competitor in the optical space, has projected that energy-efficient optical interfaces and OCS could cut the power required for training a next-generation model like GPT-5 down to the levels of its predecessor, GPT-4. This underscores a critical reality: without radical improvements in network efficiency, the financial and environmental cost of future AI development could become prohibitive.
A Calculated Play in a High-Stakes Market
Salience Labs, founded in 2021 based on research from the University of Oxford and the University of Münster, is entering a dynamic and increasingly crowded market. The industry's giants are all racing to solve the AI connectivity puzzle. Nvidia recently showcased new silicon photonics switches using co-packaged optics (CPO), while established players like Cisco and emerging specialists such as iPronics, Coherent, and Ayar Labs are all developing their own optical networking solutions.
This competitive frenzy validates the market’s direction. Dell’Oro Group forecasts that spending on switches for AI back-end networks will explode to over $100 billion by 2030. The question is no longer whether optical switching will dominate AI infrastructure, but who will provide the most scalable and interoperable solutions.
To navigate this landscape, Salience Labs is leaning heavily on a strategy of collaboration. The company has forged key partnerships to ensure its technology can be manufactured at scale and deployed with confidence. A critical alliance with Tower Semiconductor leverages the foundry's high-volume silicon photonics platforms to move the optical switches from development into mass production.
“Our partnership with Salience to develop advanced photonic integrated circuits (PIC)-based optical OCS for AI infrastructure... is set to support customers in confidently scaling from development to high-volume production,” noted Dr. Ed Preisler, Vice President and General Manager of the RF Business Unit at Tower Semiconductor.
To ensure performance and interoperability, Salience Labs is working with Keysight Technologies, using its AI Data Center Builder to test and validate the switch's capabilities. “Through our collaboration with Salience Labs, we are showcasing an optical circuit switch test... that demonstrates how these innovations can improve bandwidth efficiency and reduce latency for AI workloads,” said Ram Periakaruppan, a Vice President at Keysight.
Charting the Path to an All-Optical Future
The 32-port switch is just the first step in an ambitious roadmap. Salience Labs has already announced plans for 64- and 128-port versions to meet the relentless growth in AI cluster sizes, which are projected to quadruple every two years. This trajectory aligns with the industry's push towards 1.6 Tbps optical components, expected to ramp up in 2026, and the eventual need for 3.2T technologies for clusters exceeding 100,000 GPUs.
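The scaling arithmetic behind that roadmap is straightforward. The sketch below projects cluster sizes under the "quadruple every two years" trajectory cited above, starting from a hypothetical 25,000-GPU cluster in 2024; the baseline year and size are assumptions chosen for illustration, not figures from Salience Labs.

```python
# Illustrative projection of AI cluster growth under the article's
# "quadruple every two years" trajectory. Baseline is hypothetical.

BASELINE_YEAR = 2024
BASELINE_GPUS = 25_000  # assumed starting cluster size, for illustration only

def projected_cluster_size(year: int) -> int:
    """GPUs per cluster, quadrupling every two years from the baseline."""
    periods = (year - BASELINE_YEAR) / 2
    return int(BASELINE_GPUS * 4 ** periods)

for year in (2024, 2026, 2028):
    print(year, projected_cluster_size(year))
```

On this trajectory a 25,000-GPU cluster in 2024 passes 100,000 GPUs by 2026, which is consistent with the article's timeline for 1.6 Tbps components ramping in 2026 and 3.2T technologies being needed for clusters beyond that size.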
The ultimate vision is to enable a fully unified compute fabric, where vast pools of GPUs can be dynamically interconnected with minimal latency, operating as a single, colossal processor. This would tear down the remaining communication walls within datacenters, paving the way for the creation of even larger and more powerful AI models.
By focusing on a solution that is compatible with both today's pluggable optics and tomorrow's co-packaged architectures, Salience Labs is positioning itself as a pivotal enabler of this transition. The company's technology is not merely an incremental improvement but a foundational piece of the puzzle for building the next generation of AI infrastructure, where the speed of light is the new standard for data.