ASUS Tackles AI's Heat Crisis with Advanced Liquid Cooling
- $4.8 billion: Global data center liquid cooling market in 2025, projected to surge to $27.1 billion by 2035.
- 76%: Expected adoption of liquid cooling for AI servers by 2026, up from 15% in 2024.
- 1.18 PUE: Power Usage Effectiveness achieved by ASUS's liquid-cooled AI supercomputer for Taiwan's NCHC, significantly better than typical air-cooled facilities (1.4–1.8).
Experts agree that liquid cooling is becoming essential for sustainable AI infrastructure, offering superior efficiency and scalability as traditional air cooling fails to meet the thermal demands of next-generation hardware.
TAIPEI, Taiwan, February 26, 2026 – As the artificial intelligence revolution charges forward, the data centers powering it are facing a fundamental crisis of thermodynamics. ASUS today stepped into the breach, announcing a comprehensive suite of Optimized Liquid-Cooling Solutions and a strategic partner framework designed to cool the super-hot, high-density hardware that will define the next era of AI and high-performance computing (HPC).
The initiative directly targets the escalating thermal challenges posed by next-generation systems, such as the forthcoming NVIDIA Vera Rubin NVL72, where traditional air cooling is no longer a viable option. By moving from air to liquid, ASUS aims to enable unprecedented compute density while drastically improving energy efficiency and lowering the total cost of ownership for data center operators worldwide.
The Inescapable Heat of AI
The move by ASUS is not just a product launch; it's a direct response to a burgeoning market necessity. The global data center liquid cooling market, valued at approximately $4.8 billion in 2025, is projected to surge to an astonishing $27.1 billion by 2035. This explosive growth is almost entirely fueled by the voracious energy demands of AI workloads. While AI servers utilizing liquid cooling constituted just 15% of deployments in 2024, that figure is expected to leap to 76% by 2026.
The reason is simple physics. Modern AI accelerators and CPUs generate immense heat, pushing rack power densities well beyond 120 kW, a threshold where air cooling becomes profoundly inefficient and economically unsustainable. Projections show these densities could spike towards one megawatt per rack in the near future. Data centers, which already account for 1.5% of global electricity consumption, are on an unsustainable trajectory. Cooling systems alone can represent over 30% of a facility's total energy draw, making efficiency a paramount concern for both economic and environmental reasons.
Liquid cooling, once a niche technology for bespoke supercomputers, is now becoming a foundational requirement for mainstream AI infrastructure. It offers a far more efficient medium for heat transfer, allowing operators to pack more computational power into a smaller footprint without risking thermal throttling or hardware failure.
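The "simple physics" argument can be made concrete with a back-of-the-envelope calculation using the basic heat-transport relation Q = ṁ · c_p · ΔT. The figures below are textbook material constants and assumed temperature rises chosen for illustration, not specifications from ASUS or any vendor:

```python
# Rough comparison of the coolant flow needed to carry away 120 kW of rack
# heat, via Q = m_dot * c_p * delta_T. Constants are textbook values; the
# temperature rises (15 K for air, 10 K for water) are illustrative assumptions.

def mass_flow_kg_s(heat_w: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow rate required to absorb `heat_w` watts at a given temperature rise."""
    return heat_w / (cp_j_per_kg_k * delta_t_k)

heat = 120_000.0  # a 120 kW rack, the density threshold cited above

# Air: c_p ~ 1005 J/(kg*K), density ~ 1.2 kg/m^3
air_kg_s = mass_flow_kg_s(heat, 1005.0, 15.0)
air_m3_s = air_kg_s / 1.2

# Water: c_p ~ 4186 J/(kg*K), density ~ 1000 kg/m^3
water_kg_s = mass_flow_kg_s(heat, 4186.0, 10.0)
water_l_s = water_kg_s  # 1 kg of water is ~1 litre

print(f"Air:   {air_m3_s:.1f} m^3/s of airflow")
print(f"Water: {water_l_s:.1f} L/s of coolant")
```

Under these assumptions the rack needs several cubic metres of air per second versus only a few litres of water per second, which is why liquid becomes the only practical medium as densities climb.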
A Flexible Arsenal for Thermal Warfare
Recognizing that no two data centers are alike, ASUS is launching a multi-pronged portfolio under its "Trusted AI, Total Flexibility" banner. The solutions are designed to be scalable and adaptable, catering to new builds and retrofits alike.
The core offerings include:
Direct-to-Chip (D2C) Cooling: This highly efficient method applies liquid cooling directly to the hottest components, such as CPUs and GPUs. By using customized cold plates to extract heat at the source, D2C technology maximizes thermal performance, boosts hardware longevity, and dramatically cuts energy use.
In-row CDU-based Cooling: For large-scale deployments, ASUS is offering solutions built around in-row Coolant Distribution Units (CDUs). These units, capable of managing up to 100 kW of heat per rack with a roadmap to 200 kW, efficiently circulate coolant throughout entire server rows, providing a robust and scalable architecture for enterprise and hyperscale AI clusters.
Hybrid and Liquid-to-Air Configurations: To bridge the gap for existing facilities, ASUS provides hybrid systems that integrate liquid cooling with existing air-based infrastructure. These Liquid-to-Air solutions absorb heat via liquid loops within the server and then dissipate it using air, offering a flexible and cost-effective upgrade path without requiring a complete overhaul of a data center's facility-level cooling.
Strength in Numbers: A Strategic Cooling Ecosystem
Perhaps the most significant aspect of the announcement is ASUS's strategic partner framework. Rather than attempting to solve this complex challenge alone, the company has assembled a coalition of global infrastructure leaders and component specialists to create a validated, end-to-end ecosystem.
This framework includes industry giants Schneider Electric and Vertiv, who bring decades of expertise in data center power, management, and large-scale cooling infrastructure. Their involvement ensures that ASUS's server-level solutions can be seamlessly integrated into a facility's broader thermal and power management systems. This collaboration is critical for reducing deployment complexity and accelerating time-to-market for enterprises building out their AI capabilities.
At the component level, ASUS is leveraging the precision engineering of specialists like Auras Technology and Cooler Master. These partners, already recognized as lead suppliers for NVIDIA's next-generation liquid-cooled platforms, provide the critical cold plates, manifolds, and distribution units that form the backbone of the D2C and CDU systems. Their focus on achieving ultra-low Power Usage Effectiveness (PUE) values aligns perfectly with the sustainability goals of the initiative.
Real-World Efficiency: The NCHC Supercomputer
To prove the real-world viability of its technology, ASUS highlighted its recent deployment for Taiwan's National Center for High-performance Computing (NCHC). ASUS designed and engineered the nation's first fully liquid-cooled AI supercomputer, a dual-compute system featuring both NVIDIA HGX H200 and GB200 NVL72 clusters.
This flagship installation achieved an exceptional Power Usage Effectiveness (PUE) of just 1.18. PUE is a standard measure of data center efficiency, with a perfect score of 1.0 representing a system where 100% of power is delivered to the IT equipment. Typical air-cooled facilities often have PUEs ranging from 1.4 to 1.8, meaning a significant portion of energy is wasted on cooling. The NCHC's 1.18 PUE demonstrates a dramatic reduction in energy consumption and operational cost.
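The practical value of that PUE gap is easy to quantify. The sketch below compares the NCHC's reported 1.18 against 1.6, the midpoint of the typical air-cooled range cited above; the 1 MW IT load is an assumed figure for illustration only:

```python
# PUE = total facility power / IT equipment power, so total draw = IT load * PUE.
# Comparison of the NCHC's reported 1.18 against an assumed air-cooled 1.6
# (midpoint of the typical 1.4-1.8 range). The 1 MW IT load is hypothetical.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by an IT load and a PUE value."""
    return it_load_kw * pue

it_load = 1000.0                              # 1 MW of IT equipment (assumed)
liquid = facility_power_kw(it_load, 1.18)     # 1180 kW total draw
air = facility_power_kw(it_load, 1.60)        # 1600 kW total draw

overhead_saved_kw = air - liquid              # 420 kW less non-IT overhead
annual_kwh_saved = overhead_saved_kw * 24 * 365

print(f"Overhead saved: {overhead_saved_kw:.0f} kW")
print(f"Annual savings: {annual_kwh_saved / 1e6:.2f} GWh")
```

On these assumptions, the lower PUE trims roughly 420 kW of continuous overhead, or several gigawatt-hours per year, for the same delivered compute.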
This level of efficiency has profound implications, translating directly into lower electricity bills, a reduced carbon footprint, and a more sustainable approach to high-performance computing. It also helps data centers meet increasingly stringent regulations, such as Germany's Energy Efficiency Act, which mandates a PUE of 1.3 or lower by 2027. Furthermore, the waste heat captured by liquid cooling systems can be repurposed for district heating or industrial processes, turning data centers into contributors to a circular energy economy.
ASUS will be showcasing these advanced liquid-cooling solutions as a Diamond Sponsor at the upcoming NVIDIA GTC 2026 conference in San Jose, California, from March 16-19, offering a glimpse into the cooler, more powerful future of artificial intelligence.
