Celestica and AMD Forge 'Helios' to Challenge AI Infrastructure Giants
- $200 billion: Projected global spending on AI infrastructure by 2028
- 75%: Enterprises embedding AI into core operations
- Late 2026: Customer availability of the 'Helios' platform
Experts view the Celestica-AMD 'Helios' platform as a strategic alternative to proprietary AI infrastructure, leveraging open standards to mitigate vendor lock-in and improve supply chain resilience.
TORONTO and SANTA CLARA, Calif. – March 16, 2026 – In a significant move to reshape the burgeoning artificial intelligence hardware market, Celestica and AMD today announced a strategic collaboration to launch "Helios," a new rack-scale AI platform. The partnership aims to provide a powerful, open-standards alternative for deploying AI at scale, combining AMD’s high-performance computing prowess with Celestica’s deep expertise in data center infrastructure and advanced manufacturing.
The "Helios" platform, slated for customer availability in late 2026, represents a direct response to the explosive demand for AI processing power that is straining existing supply chains and driving a multi-hundred-billion-dollar infrastructure arms race.
A New Blueprint for AI Deployment
At the heart of the collaboration, Celestica will lead the research, design, and manufacturing of the platform's critical scale-up networking switches. These components are essential for creating a high-speed fabric that interconnects AMD’s next-generation Instinct™ MI450 Series GPUs, which are optimized for the immense computational demands of large-scale AI training and inference clusters.
The entire "Helios" architecture is built upon the Open Compute Project (OCP) and Open-Rack-Wide (ORW) form-factor. This commitment to open standards is a key differentiator, promising greater interoperability and flexibility for customers. The networking switches will leverage the Ultra Accelerator Link over Ethernet (UALoE) architecture, an open standard for scale-up connectivity designed to facilitate high-speed communication between accelerators.
“Deploying AI at scale requires infrastructure that can be delivered quickly, consistently, and with the performance customers expect,” said Steven Dorwart, senior vice president and general manager, Hyperscalers, Celestica. “Our collaboration with AMD on the ‘Helios’ platform brings together our global engineering, manufacturing, and supply chain capabilities with AMD’s innovation in high-performance computing. Together, we are accelerating access to AI systems optimized for the most demanding workloads of the next era.”
AMD echoed this vision, positioning "Helios" as a foundational shift in how AI infrastructure is built and deployed.
“‘Helios’ represents a new blueprint for AI infrastructure, enabling customers to deploy AI at scale with the performance, efficiency, and flexibility required for the next generation of workloads,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. “We are pleased to work with Celestica, leveraging their expertise in delivering leading-edge networking switch technology with AMD’s leadership in high-performance and AI computing.”
Navigating the Competitive AI Arms Race
The Celestica-AMD partnership enters a fiercely competitive arena dominated by established players and well-funded challengers. Nvidia, the current market leader, has built a powerful ecosystem around its DGX systems and is already previewing its next-generation "Vera Rubin" platform. Meanwhile, Intel is aggressively pushing its Gaudi AI accelerators, and hyperscale cloud providers like AWS and Google are investing heavily in their own custom silicon, such as Trainium and Tensor Processing Units (TPUs), to optimize their AI services.
Market analysts project staggering growth in this sector. Projections indicate that global spending on AI infrastructure could surge past $200 billion by 2028, with some forecasts predicting hyperscaler spending alone could approach $700 billion in 2026. This gold rush is fueling the creation of massive, dedicated "AI factories"—data centers built from the ground up for AI workloads.
In this landscape, the "Helios" platform's open-standards approach offers a compelling strategic alternative. By adhering to OCP specifications, AMD and Celestica aim to mitigate the risks of vendor lock-in that can come with proprietary, vertically integrated solutions. This could appeal to large enterprises, research institutions, and even rival cloud providers looking to build more flexible, cost-effective, and diverse AI infrastructure without being tied to a single supplier's ecosystem.
The Strategic Power of Open Standards and Integrated Manufacturing
The decision to build "Helios" on OCP standards is more than a technical choice; it is a strategic one. OCP was founded to create modular, efficient, and scalable hardware for data centers, and its principles are now being applied to the unique challenges of AI. The immense heat and power consumption of modern GPUs necessitate advanced rack designs, liquid cooling solutions, and high-density power distribution—all areas where OCP standards like Open Rack V3 (ORV3) are driving innovation.
This collaboration also highlights Celestica's transformation from a contract manufacturer to a design-centric platform integrator. The company is not merely assembling components; it is providing a fully functional, validated rack-scale system. This involves deep expertise in high-speed networking, advanced thermal management including liquid cooling, and the complex power infrastructure required to support thousands of power-hungry accelerators.
By taking on the R&D and design of the networking fabric, Celestica occupies a critical control point in the AI data center stack. This integrated approach simplifies deployment for customers, who receive a pre-validated, plug-and-play system, thereby reducing time-to-value. This capability is particularly crucial as the complexity of AI hardware outpaces the integration capabilities of many organizations.
Building a Resilient Supply Chain for Explosive Demand
One of the most significant challenges facing the AI industry is supply chain fragility. The unprecedented demand for accelerators and networking components has led to long lead times and allocation constraints. The joint announcement explicitly targets this issue, aiming to "improve supply chain resiliency."
The open nature of the "Helios" platform contributes to this goal by fostering a more diverse ecosystem of component suppliers. Furthermore, Celestica's global manufacturing footprint and expertise in supply chain logistics enable the delivery of regionally built, fully integrated AI racks. This can help de-risk supply chains and accelerate deployment for organizations across cloud, enterprise, and research sectors.
As companies race to embed AI into their core operations—a trend now seen in over 75% of enterprises—the ability to acquire and deploy compute infrastructure quickly and reliably has become a primary competitive advantage. By combining AMD's next-generation silicon with Celestica's proven ability to deliver complex systems at scale, the "Helios" platform is positioned to be a critical enabler for organizations looking to build their own AI capabilities and compete in an increasingly intelligent world.
