Eoptolink Unveils 12.8T Optic to Cool AI's Insatiable Data Needs
- 12.8 Tbps: Eoptolink's new optical transceiver achieves record-breaking throughput of 12.8 terabits per second.
- 400W heat dissipation: Each XPO module can dissipate up to 400W of heat using integrated liquid cooling.
- 204.8 Tbps density: The technology enables a front-panel density of 204.8 Tbps in a standard 4-RU rack, a fourfold improvement over current 1.6T OSFP solutions.
Experts view the introduction of Eoptolink's 12.8T liquid-cooled optical transceiver and the XPO MSA as a critical industry shift, offering a standardized, high-density solution to the thermal and bandwidth challenges of next-generation AI data centers.
LOS ANGELES, CA – March 12, 2026 – As the artificial intelligence boom strains the world's digital infrastructure, a consortium of technology leaders today unveiled a new weapon in the battle against data bottlenecks and soaring energy use. At the OFC 2026 conference, optical solutions provider Eoptolink Technology Inc., Ltd. announced its industry-first 12.8 terabit-per-second (Tbps) pluggable optical transceiver, a liquid-cooled module designed to handle the immense data flows of next-generation AI clusters.
The announcement was part of a coordinated industry push, with Eoptolink joining networking giant Arista Networks and competitor TeraHop as founding members of the new XPO Multi-Source Agreement (MSA). This new alliance aims to standardize a revolutionary class of high-density, liquid-cooled pluggable optics, signaling a major shift in how future AI data centers will be built and cooled.
The AI Data Center's Breaking Point
The relentless growth of AI models has created a critical challenge for data center operators. Training large language models (LLMs) and running complex inference tasks require thousands of GPUs to exchange massive datasets at blistering speeds. This has pushed traditional air-cooled infrastructure to its physical and thermal limits.
Today's most advanced 800G optical modules can consume over 15 watts of power, and upcoming 1.6T modules are projected to exceed 30W. When thousands of these are packed into high-density racks alongside power-hungry GPUs, they create a "thermal wall" that air cooling can no longer effectively overcome. This excess heat not only drives up energy costs but also degrades performance and threatens the reliability of the entire system.
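The per-module wattage translates directly into rack-level heat load. A rough back-of-the-envelope calculation illustrates the scale; the per-module power figures come from the reporting above, but the transceiver count per rack is an illustrative assumption, not a figure from the announcement:

```python
# Back-of-the-envelope heat load from pluggable optics alone in one rack.
# Per-module power (15 W for 800G, 30 W projected for 1.6T) is from the
# article; MODULES_PER_RACK is a hypothetical illustration value.

def optics_heat_watts(module_power_w: float, modules_per_rack: int) -> float:
    """Total heat (W) contributed by the optical modules in one rack."""
    return module_power_w * modules_per_rack

MODULES_PER_RACK = 128  # assumption for illustration

heat_800g = optics_heat_watts(15.0, MODULES_PER_RACK)  # 800G modules
heat_1p6t = optics_heat_watts(30.0, MODULES_PER_RACK)  # 1.6T modules

print(f"800G optics: {heat_800g / 1000:.2f} kW of heat per rack")
print(f"1.6T optics: {heat_1p6t / 1000:.2f} kW of heat per rack")
```

Even before counting the GPUs themselves, several kilowatts of optics heat per rack is a burden that air cooling struggles to remove.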
Furthermore, the physical limitations of copper cabling are becoming increasingly apparent. As GPU bandwidth skyrockets, the effective distance for reliable copper interconnects shrinks, making scalable, rack-to-rack communication a significant engineering hurdle. The industry has recognized that a fundamental change is required, moving beyond simply increasing speed to rethinking how power and heat are managed at the component level.
A Liquid-Cooled Revolution in a Pluggable Form Factor
Eoptolink's 12.8T XPO module is engineered to address this crisis directly. It achieves its staggering throughput by using 64 lanes, each operating at 200 Gbps. This allows for a record-breaking front-panel density of 204.8 Tbps in a standard 4-RU rack, a fourfold improvement over the current 1.6T OSFP solutions. This density is critical for building the massive, interconnected fabrics that next-generation AI models demand.
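The headline numbers are easy to sanity-check. The 64-lane and 200 Gbps figures are from the announcement; the implied port count of 16 XPO modules in 4 RU is derived arithmetic, an inference rather than a stated specification:

```python
# Sanity-check the announced figures: 64 lanes x 200 Gbps per lane gives
# the per-module throughput, and dividing the quoted front-panel density
# by it implies the module count per 4-RU chassis (an inference, not a
# number stated in the announcement).

LANES = 64
GBPS_PER_LANE = 200

module_tbps = LANES * GBPS_PER_LANE / 1000  # per-module throughput (Tbps)
panel_tbps = 204.8                          # quoted 4-RU front-panel density
ports = panel_tbps / module_tbps            # implied XPO ports in 4 RU

# The quoted "fourfold improvement" implies current 1.6T OSFP solutions
# reach about a quarter of this density in the same footprint.
osfp_baseline_tbps = panel_tbps / 4

print(f"Per-module throughput: {module_tbps} Tbps")
print(f"Implied XPO ports in 4 RU: {ports:.0f}")
print(f"Implied 1.6T OSFP baseline: {osfp_baseline_tbps} Tbps per 4 RU")
```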
The key innovation, however, is its integrated liquid cooling. Each XPO module features a built-in cold plate capable of dissipating up to 400W of heat. This allows the high-power optics to operate reliably in dense configurations without overheating, effectively breaking through the thermal wall that has constrained air-cooled designs.
"XPO modules address key challenges our customers and the industry are facing as network bandwidth requirements continue to scale," said Sean Davies, Vice President of Sales at Eoptolink, in the company's official announcement. "Liquid cooling enables higher-power optical modules while maintaining thermal efficiency, allowing much greater port density. At the same time, front-panel pluggability preserves the serviceability and deployment flexibility that operators rely on."
This preservation of a pluggable, front-panel-serviceable design is a crucial differentiator. While alternative technologies like co-packaged optics (CPO) promise efficiency gains by integrating optics directly onto silicon, they present significant serviceability challenges. The XPO standard offers a powerful, liquid-cooled solution without forcing operators to abandon the familiar and flexible pluggable module paradigm.
Standardizing the Future with the XPO MSA
Perhaps more significant than any single product is the formation of the XPO MSA itself. By launching with key partners like Arista Networks and TeraHop, and with support from hyperscale cloud providers like Microsoft, the alliance is making a strong case for an industry-wide standard. Multi-source agreements are vital for creating interoperable ecosystems, preventing vendor lock-in, and driving down costs through competition and volume manufacturing.
The XPO MSA aims to define the mechanical, electrical, and thermal specifications for this new class of optics. According to its charter, the standard is designed for maximum flexibility, supporting a wide range of optical interfaces from short-reach to long-haul coherent, as well as copper and even RF-Microwave applications. It also provides for different interface types, including power-saving linear optics that eliminate retimer chips, which is a key focus for reducing the overall power budget of AI clusters.
This collaborative approach is essential for gaining the trust of data center operators, who must make infrastructure bets that will last for years. By establishing a clear, multi-vendor roadmap for high-density liquid-cooled optics, the XPO MSA provides the confidence needed for widespread adoption.
Redefining Data Center Economics
The implications of this technology extend beyond raw performance. By quadrupling the front-panel density, the XPO standard allows operators to build more powerful AI clusters within a smaller physical footprint, saving valuable data center real estate. The superior efficiency of liquid cooling also promises to significantly lower Power Usage Effectiveness (PUE), a key metric for operational costs and environmental sustainability.
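Since PUE is simply total facility power divided by IT equipment power, the effect of cheaper heat removal can be sketched in a few lines. The overhead fractions below are hypothetical illustration values chosen only to show the mechanism, not figures from the announcement:

```python
# Illustrative PUE comparison. PUE = total facility power / IT power,
# so lower cooling overhead drives PUE toward the ideal of 1.0. The
# overhead values (450 kW air, 150 kW liquid) are hypothetical.

def pue(it_power_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness for a given IT load and facility overhead."""
    return (it_power_kw + overhead_kw) / it_power_kw

IT_LOAD_KW = 1000.0  # assumed IT load for illustration

print(f"Air-cooled:    PUE = {pue(IT_LOAD_KW, 450.0):.2f}")
print(f"Liquid-cooled: PUE = {pue(IT_LOAD_KW, 150.0):.2f}")
```

Shaving even a few tenths off PUE compounds across megawatt-scale facilities, which is why operators treat the metric as a first-order economic and sustainability lever.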
As the industry gathers in Los Angeles for OFC 2026, the live demonstrations from Eoptolink and its XPO partners will be under intense scrutiny. They represent a bold step towards a future where data center infrastructure can finally keep pace with the exponential growth of artificial intelligence. While competing technologies continue to evolve, the debut of the XPO standard marks a pivotal moment, offering a practical, powerful, and standardized path forward for cooling the future of AI.