STL Aims to Rewire US Data Centers for the AI Era with Neuralis
- $35 billion: U.S. AI data center market value in 2025
- $167 billion: Projected market value by 2033
- 90%: Portion of data flow in AI clusters attributed to East-West traffic
Experts agree that Neuralis addresses critical bottlenecks in AI data center connectivity, offering a scalable, high-density solution essential for next-generation computing infrastructure.
WASHINGTON, D.C., April 20, 2026 – As the artificial intelligence revolution accelerates, the physical infrastructure underpinning it is straining at the seams. In response to this mounting pressure, connectivity solutions provider STL today unveiled Neuralis, a flagship suite of data center products aimed directly at the heart of the AI challenge, during the Data Center World 2026 conference. The launch by its U.S. subsidiary, STL Optical Connectivity NA (STLOC), signals a significant move to equip American data centers with the high-speed, high-density "nervous system" required for the next generation of computing.
The U.S. AI data center market, already valued at over $35 billion in 2025, is projected to surge to more than $167 billion by 2033, a testament to the colossal investment in AI capabilities. However, this explosive growth is creating unprecedented bottlenecks, not just in power and cooling, but in the fundamental connectivity that allows AI models to function.
The AI Data Center Bottleneck
Modern AI workloads have fundamentally rewritten the rules of data center traffic. Unlike traditional applications with predictable "North-South" traffic flowing between users and servers, AI training and inference rely on massive, continuous communication between thousands of specialized processors (GPUs) within the data center. This "East-West" traffic, which can account for up to 90% of all data flow in an AI cluster, demands ultra-high bandwidth and near-instantaneous, low-latency connections.
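The scale of this East-West traffic can be sketched with a back-of-envelope calculation. The snippet below estimates per-GPU traffic for a single gradient synchronization using the standard ring all-reduce communication pattern; the model size and GPU count are illustrative assumptions, not figures from the announcement.

```python
# Illustrative sketch of East-West traffic in an AI training cluster.
# Model size and GPU count below are hypothetical example values.

def ring_allreduce_bytes_per_gpu(model_bytes: int, num_gpus: int) -> int:
    """In a ring all-reduce, each GPU sends and receives roughly
    2*(N-1)/N times the gradient size, so per-GPU traffic stays
    close to 2x the model size no matter how many GPUs join."""
    return int(2 * (num_gpus - 1) / num_gpus * model_bytes)

# Example: a 70B-parameter model with fp16 gradients (2 bytes each),
# synchronized across 1,024 GPUs.
model_bytes = 70_000_000_000 * 2
traffic = ring_allreduce_bytes_per_gpu(model_bytes, num_gpus=1024)
print(f"~{traffic / 1e9:.0f} GB exchanged per GPU per gradient sync")
```

Because this exchange repeats on every training step, even small increases in per-link latency or losses in bandwidth compound into substantial GPU idle time, which is why the fabric, not the processors, often sets the ceiling on cluster performance.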
When the network fabric cannot keep pace, these incredibly expensive GPUs sit idle, wasting energy and computational potential. Industry analysts have consistently pointed to interconnectivity as a critical limiting factor in scaling AI supercomputers. Furthermore, the sheer density of AI hardware creates immense physical challenges. Racks are now exceeding 100 kilowatts in power consumption, generating intense heat and requiring every square inch of floor space to be optimized. Cramming more processing power into existing footprints means the cabling that connects it all must become exponentially denser and more efficient.
Traditional cabling infrastructure, designed for a different era of computing, simply cannot meet these demands. Installation errors, lengthy deployment times, and performance degradation under thermal stress are major hurdles for hyperscalers and cloud providers racing to build out their AI capacity.
Neuralis: A Nervous System for AI Supercomputers
STL's Neuralis portfolio is engineered to address these specific pain points, drawing its name from the intricate pathways of a biological neural network. The suite is built on two core pillars designed to move complexity out of the chaotic data hall and into a controlled factory environment.
The first pillar, Maximizing the AI Whitespace, focuses on ultra-high-density cabling for GPU clusters. By utilizing factory-terminated and tested assemblies with advanced MPO and MMC connectors, Neuralis aims to slash on-site labor and deployment timelines. This "plug-and-play" approach not only accelerates build-outs but also significantly reduces the risk of human error during complex fiber installations, ensuring higher reliability from day one.
The second pillar is High-Speed Data Center Interconnect (DCI), engineered for connecting entire buildings across a data center campus. The flagship of this pillar is the Celesta IBR series of cables. These ultra-compact cables represent a leap in fiber density, packing up to 6,912 rollable ribbon fibers into a single, small-diameter cable. This allows operators to scale their interconnect capacity to petabyte levels without requiring extensive new ductwork or cable pathways, a crucial advantage in both new builds and retrofits. The design also accounts for the intense thermal and safety demands unique to AI deployments, ensuring performance is maintained even in high-temperature environments.
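The density advantage is easy to quantify. The comparison below assumes a conventional high-count cable of 864 fibers, a common figure for traditional loose-tube designs but an assumption here, not a number from the announcement; only the 6,912-fiber count comes from STL's stated specification.

```python
# Back-of-envelope density comparison for the Celesta IBR series.
# CONVENTIONAL_FIBERS is an assumed typical high-count cable,
# not a figure from the article.

CELESTA_FIBERS = 6912        # fiber count stated in the announcement
CONVENTIONAL_FIBERS = 864    # assumed conventional loose-tube cable

cables_replaced = CELESTA_FIBERS // CONVENTIONAL_FIBERS
print(f"One Celesta IBR run carries as many fibers as "
      f"{cables_replaced} conventional cables")
```

Under these assumptions, a single cable pull replaces several conventional runs, which is the practical meaning of scaling capacity without new ductwork.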
From Silica to Socket: A Vertically Integrated Approach
A key differentiator STL emphasizes is its complete, end-to-end control over the manufacturing process. The company is one of a few globally that is fully vertically integrated, from creating the ultra-pure glass preform from raw silica, to precision-drawing the optical fiber, to cabling it into advanced products like the Celesta IBR series, and finally, delivering fully tested, connectorized assemblies.
This deep integration allows for a level of precision and quality control that is difficult to achieve with a fragmented supply chain. It ensures that every component is optimized to work together, from the core of the fiber to the final plug.
"AI demands a level of precision and density that traditional cabling simply cannot meet," said Ankit Agarwal, Managing Director at STL, during the announcement. "With STL Neuralis, we are providing the high-speed, low-latency foundation that allows GPU clusters to perform at their peak, moving complexity out of the field and into a controlled, high-precision factory environment."
Bolstering the US AI Supply Chain
The launch is not just a technological statement but also a strategic one. All Neuralis products destined for the North American market will be supported by STLOC's state-of-the-art manufacturing facility in Lugoff, South Carolina. This commitment to domestic production is a significant advantage in an era marked by global supply chain volatility and a growing emphasis on securing critical national infrastructure.
For U.S.-based hyperscalers and "Neocloud" providers, a domestic supply chain for this foundational technology means greater predictability, reduced lead times, and the ability to work more closely with the manufacturer to customize solutions. In the high-stakes race to build out AI infrastructure, the ability to deploy faster and more reliably can be a decisive competitive edge.
As the industry grapples with grid-level power shortages and multi-year wait times for essential electrical components, optimizing every other part of the data center build process becomes paramount. By providing a robust, scalable, and rapidly deployable connectivity backbone made in the U.S., STL is positioning Neuralis as a critical enabler for America's AI ambitions. The new portfolio enters a competitive field, but its focus on the specific, urgent needs of AI and its backing by a domestic manufacturing presence may provide the leverage needed to wire the future of American data centers.