Building AI Factories: The Blueprints for the AI Revolution
- Power Density: Modern AI hardware racks demand up to 120 kW, compared to the 5-10 kW consumed by traditional data center racks.
- Cooling Efficiency: New designs support a Technology Cooling System (TCS) loop temperature of 45°C for enhanced cooling.
- Digital Twin Integration: AVEVA's software simulates power distribution, thermal dynamics, and airflow in virtual AI factories before physical construction.
Experts agree that the collaboration between Schneider Electric, NVIDIA, and AVEVA provides a critical blueprint for designing energy-efficient, high-performance AI infrastructure, addressing the unprecedented power and cooling demands of next-generation AI systems.
Building AI Factories: The Infrastructure Powering the Revolution
SAN JOSE, CA – March 16, 2026 – As the artificial intelligence boom reshapes industries, a colossal challenge is emerging from behind the scenes: the immense physical and energy demands required to power the AI revolution. The world’s insatiable appetite for generative AI and large language models is driving the need for a new class of hyperscale data centers, or "AI Factories," that consume power on a scale previously unimagined. Addressing this critical bottleneck, energy technology leader Schneider Electric, in a deepening collaboration with chip giant NVIDIA and industrial software firm AVEVA, has unveiled a comprehensive blueprint to design, build, and operate these gigawatt-scale facilities.
Announced at NVIDIA’s GTC 2026 conference, the initiative provides a validated roadmap for the power, cooling, and operational management of next-generation AI infrastructure, aiming to accelerate deployment while maximizing energy efficiency and performance.
Taming the Power-Hungry Beast of AI
The core of the challenge lies in the staggering power density of modern AI hardware. While traditional data center racks typically consume between 5 and 10 kilowatts (kW), NVIDIA’s latest rack-scale systems, such as the current Blackwell GB200 NVL72, demand upwards of 120 kW. The newly announced collaboration directly targets the next generation, with Schneider Electric unveiling one of the first reference designs for the forthcoming NVIDIA Vera Rubin NVL72 platform.
This is not a simple upgrade; it represents a fundamental rethinking of data center architecture. The validated blueprint provides a crucial roadmap for handling the extreme power and cooling requirements of these systems. Key innovations include a shift to a higher supply voltage of 480 VAC for more efficient power distribution and support for a higher Technology Cooling System (TCS) loop temperature of 45°C, which enhances cooling efficiency.
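The efficiency gain from the higher distribution voltage follows directly from the three-phase power relation: for the same load, current falls as voltage rises, which shrinks conductor losses and cable sizing. A back-of-the-envelope sketch in Python, assuming a 0.95 power factor and a 415 VAC baseline (both illustrative assumptions, not figures from the announcement):

```python
import math

def line_current(power_w: float, volts_ll: float, pf: float = 0.95) -> float:
    """Line current of a balanced three-phase load: I = P / (sqrt(3) * V_LL * PF)."""
    return power_w / (math.sqrt(3) * volts_ll * pf)

rack_w = 120_000                      # one 120 kW NVL72-class rack, per the figures above
i_415 = line_current(rack_w, 415.0)   # a common legacy distribution voltage (assumed baseline)
i_480 = line_current(rack_w, 480.0)   # the higher supply voltage named in the blueprint
print(f"415 VAC: {i_415:.0f} A  |  480 VAC: {i_480:.0f} A")
```

At 480 VAC the same rack draws roughly 14% less current than at 415 VAC, which is where the distribution-efficiency claim comes from.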
These designs address the new reality of AI clusters, in which power-hungry GPU racks are grouped tightly together. The reference architecture delivers a separate, higher-voltage feed specifically to these GPU racks, enabling larger, more powerful clusters while optimizing power delivery across the entire facility. The goal, as defined by the partners, is to maximize "token revenue per megawatt," a new efficiency metric for the AI era. By designing for different operating points, such as NVIDIA's efficiency-focused MaxQ mode, operators can sustain computing performance even when facing power constraints.
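The "token revenue per megawatt" idea can be made concrete with a toy calculation. All the numbers below are purely illustrative (not published benchmarks), and the 1.3 overhead factor is an assumed PUE-style multiplier for cooling and distribution; the point is only that an efficiency-focused operating point can win on this metric even at lower absolute throughput:

```python
def tokens_per_mw(tokens_per_sec: float, rack_kw: float, overhead: float = 1.3) -> float:
    """Rack throughput normalized by total facility power draw
    (compute plus an assumed overhead factor), in tokens/s per MW."""
    total_mw = rack_kw * overhead / 1000
    return tokens_per_sec / total_mw

# Hypothetical operating points for one rack (illustrative numbers only):
max_p = tokens_per_mw(1_000_000, rack_kw=120)  # full-power mode
max_q = tokens_per_mw(850_000, rack_kw=90)     # efficiency-focused (MaxQ-style) mode
print(f"full power: {max_p:,.0f} tok/s/MW  |  MaxQ-style: {max_q:,.0f} tok/s/MW")
```

Under a fixed facility power cap, the operating point with the higher tokens-per-megawatt figure delivers more aggregate output, even though each rack runs slower.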
Designing the Future in a Digital Universe
Before a single piece of concrete is poured or a cable is laid, the AI Factory of the future will be built, tested, and perfected in the digital realm. A cornerstone of the collaboration is the integration of a new lifecycle digital twin architecture, developed by AVEVA, directly into the NVIDIA Omniverse DSX Blueprint. NVIDIA Omniverse is a platform for creating and operating 3D-rendered, physically accurate virtual worlds, making it the ideal environment to simulate these incredibly complex facilities.
This integration allows stakeholders to assemble a complete virtual model of an AI factory. AVEVA’s software then runs multi-domain simulations on this digital twin, validating everything from power distribution and thermal dynamics to airflow performance and control systems under realistic operational conditions. Engineers can rapidly evaluate multiple scenarios, optimize layouts for cooling, and verify system performance before committing to costly physical construction. The result is a performance-optimized design that dramatically reduces engineering cycles, minimizes risk, and improves deployment speed—a critical factor in the race to bring AI capacity online.
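One of the thermal checks a digital twin of this kind encodes is a simple coolant energy balance: the flow rate needed to carry a rack's heat away at a given coolant temperature rise. A minimal sketch, assuming a water-like coolant and a 10 K rise above the 45°C TCS supply temperature (the rise and coolant properties are assumptions, not figures from the announcement):

```python
def coolant_flow_kg_per_s(heat_w: float, delta_t_k: float,
                          cp_j_per_kg_k: float = 4186.0) -> float:
    """Mass flow required to remove heat_w with a coolant temperature rise
    of delta_t_k, from the steady-state energy balance Q = m_dot * cp * dT."""
    return heat_w / (cp_j_per_kg_k * delta_t_k)

# 120 kW rack, coolant entering at 45 °C and (assumed) leaving at 55 °C:
flow = coolant_flow_kg_per_s(120_000, delta_t_k=10)
print(f"{flow:.2f} kg/s  (~{flow * 60:.0f} L/min for a water-like coolant)")
```

Running many such balances across every rack, loop, and failure scenario, before construction, is the kind of validation the digital twin automates.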
“As AI workloads scale in both size and complexity, the margin for error in data center design becomes incredibly small,” said Manish Kumar, Executive Vice President, Secure Power & Data Centers at Schneider Electric. “Delivering AI at scale requires tightly integrated electrical, cooling and digital architectures that can support both unprecedented performance demands while maintaining peak energy efficiency.”
This sentiment was echoed by NVIDIA, highlighting the symbiotic nature of the partnership. “Gigawatt-scale AI factories demand a fundamentally new class of energy-efficient and highly predictable infrastructure,” stated Vladimir Troy, vice president of AI infrastructure at NVIDIA. “Together, NVIDIA and Schneider Electric are providing the power, cooling, and digital twin architectures needed to accelerate time-to-token for our customers worldwide.”
The Dawn of the Autonomous Data Center
Beyond design and construction, the collaboration extends to the complex task of day-to-day operations. Modern data centers generate a deluge of alarms and operational data, often overwhelming human operators and leading to "alarm fatigue," where critical alerts can be missed. To combat this, Schneider Electric announced it is successfully testing NVIDIA’s Nemotron large language model to power a new agentic AI for alarm management.
This marks a significant leap toward autonomous operations. Instead of simply flagging an issue, the agentic AI is designed to autonomously analyze data streams from multiple systems, diagnose the root cause of an alarm, and recommend specific corrective actions. By leveraging a suite of integrated tools and working alongside expert technicians, the AI aims to deliver faster, more consistent issue resolution, reduce unnecessary service dispatches, and ultimately enhance the resilience of the entire facility. This milestone reinforces a commitment to redefine asset performance through AI, creating a smarter, self-optimizing brain for the data center's physical body.
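The triage flow described above (analyze correlated data, diagnose, recommend an action) can be sketched as a small decision step. This is a hypothetical, rule-based stand-in for the LLM reasoning stage; none of the signal names, thresholds, or recommended actions come from Schneider Electric's actual system:

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    source: str   # originating system, e.g. "CDU-7" (hypothetical naming)
    signal: str   # alarm type, e.g. "coolant_supply_temp_high"
    value: float  # reading that tripped the alarm

def triage(alarm: Alarm, telemetry: dict) -> str:
    """Correlate an alarm with telemetry from neighbouring systems and
    recommend an action. In the real system an LLM performs this step;
    here two hard-coded rules stand in for it."""
    if alarm.signal == "coolant_supply_temp_high":
        if telemetry.get("chiller_load_pct", 0) > 95:
            return "Chiller saturated: shift load or stage additional cooling."
        return "Check CDU valve position and flow sensors before dispatching a technician."
    return "Escalate to operator: no matching runbook."

alarm = Alarm("CDU-7", "coolant_supply_temp_high", 49.0)
print(triage(alarm, {"chiller_load_pct": 97}))
```

The value of the agentic approach is in the middle branch: correlating the alarm with other systems' telemetry can rule out an unnecessary service dispatch before anyone is paged.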
This series of announcements builds on a long-standing partnership that has seen the companies align on key technologies, from ETAP’s electrical modeling integration into Omniverse to joint support for the Alliance for OpenUSD, which promotes interoperability for 3D digital twins. By combining NVIDIA's leadership in accelerated computing, Schneider Electric's expertise in energy management and critical power, and AVEVA's industrial software prowess, the collaboration presents a formidable, end-to-end ecosystem. This integrated approach, from the silicon chip to the cooling system to the management software, provides a powerful answer to the immense challenges of building the physical foundation for our AI-driven future.