The AI Power Revolution: 800VDC Architecture to Save Billions

📊 Key Data
  • $4M–$8M in CapEx savings per 10-megawatt data center build with 800VDC architecture
  • 50%–80% reduction in copper cabling needed
  • 96%+ end-to-end efficiency with HVDC systems vs. up to 18% loss in traditional AC systems
🎯 Expert Consensus

Experts agree that 800VDC architecture is the inevitable solution for AI data centers, offering significant cost savings, efficiency gains, and sustainability benefits over legacy AC systems.

MORGAN HILL, Calif. – February 10, 2026 – As the artificial intelligence boom accelerates, the data centers that power it are hitting a fundamental wall: energy. Now, a new technical white paper from Silicon Valley developer Enteligent argues for a radical redesign of data center power infrastructure, a shift from traditional alternating current (AC) to a high-voltage direct current (DC) architecture that could save operators billions of dollars and prevent the AI revolution from being throttled by its own energy demands.

The comprehensive paper details how an 800-volt DC (800VDC) power system can dramatically increase efficiency, slash energy waste, and reduce the staggering costs associated with building and running the next generation of AI and GPU-centric data centers. This proposed shift represents one of the most significant changes to data center infrastructure in decades, moving from a century-old power standard to one purpose-built for the digital age.

The Billion-Dollar Bottleneck

For years, the silent workhorse of the digital world has been the AC-powered data center, designed for an era of computing with vastly different needs. These legacy systems were engineered for server racks consuming between 3 and 12 kilowatts (kW) of power. Today, the landscape has been irrevocably altered by AI. Modern AI training clusters, packed with powerful GPUs, routinely demand power densities exceeding ten times that amount, pushing past 50 kW and rapidly approaching 100 kW per rack.

This exponential leap in power consumption has stretched traditional AC architecture to its breaking point. The process of converting high-voltage AC from the grid down to the low-voltage DC used by servers involves multiple, inefficient steps. Each conversion stage wastes energy as heat, compounding cooling costs and creating thermal instability. To deliver the immense power required by an AI rack, operators using AC systems must install thick, expensive, and unwieldy copper cables, often in multiple parallel runs, which inflates material costs and complicates facility design.

“As AI racks push significantly beyond 50 kW and move toward 100 kW and higher, legacy AC power architectures become increasingly difficult to scale,” noted Mark Vena, CEO and Principal Analyst at SmartTech Research, in a recent statement. “High-voltage DC distribution is gaining traction because it aligns power delivery with modern compute requirements while improving efficiency and reliability, providing long-term and cost-efficient infrastructure economics.”

A Direct Current Solution

Enteligent’s white paper champions an 800VDC system as the inevitable solution. By converting grid power to 800VDC at the facility's edge and distributing it directly to the racks, this architecture eliminates most of the wasteful intermediate AC-to-DC conversion steps. The result is a streamlined, more efficient power train. While traditional systems can lose up to 18% of total power before it ever reaches the server components, HVDC systems can achieve end-to-end efficiencies of over 96%.
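The efficiency gap comes from multiplying per-stage losses. As a minimal sketch, the stage efficiencies below are illustrative assumptions (not figures from the white paper) showing how a multi-stage AC chain compounds into double-digit losses while a shorter DC chain can stay above 96%:

```python
# Sketch: compounding per-stage conversion losses.
# All stage efficiencies below are assumed for illustration.

def end_to_end(stage_efficiencies):
    """Multiply per-stage efficiencies to get the end-to-end figure."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Legacy AC path: transformer -> double-conversion UPS -> PDU -> rack PSU (AC->DC)
ac_path = [0.98, 0.94, 0.98, 0.94]
# 800VDC path: grid rectification at the facility edge -> DC busway -> rack DC-DC
dc_path = [0.985, 0.995, 0.98]

print(f"AC end-to-end:   {end_to_end(ac_path):.1%}")   # roughly 85%
print(f"HVDC end-to-end: {end_to_end(dc_path):.1%}")   # above 96%
```

The point is structural: removing a conversion stage helps more than marginally improving one, because losses multiply through the chain.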

This efficiency has a direct impact on one of the most significant physical constraints in data centers: copper. Based on the principle that higher voltage allows the same amount of power to be transmitted with lower current (P=V×I), 800VDC systems require dramatically smaller conductors. The research indicates that this shift can reduce the amount of copper cabling needed by 50% to 80%. This not only lowers upfront material costs but also frees up valuable space within the data center and simplifies the physical installation process. Fewer conversion stages also mean less waste heat is generated within the server aisles, directly reducing the immense energy burden of cooling systems and improving the overall Power Usage Effectiveness (PUE) of the facility.
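The copper math follows directly from P = V × I. The sketch below compares a hypothetical 415 V three-phase AC feed against an 800VDC pair for a 100 kW rack; the voltages, conductor counts, and fixed-current-density assumption are illustrative, since real cable sizing also depends on ampacity tables and voltage-drop limits:

```python
# Sketch: how higher voltage shrinks current and conductor cross-section.
# Assumed voltages and conductor counts; fixed current density for simplicity.
import math

def dc_current(power_w, voltage_v):
    """I = P / V for a DC feed."""
    return power_w / voltage_v

def three_phase_current(power_w, line_voltage_v, power_factor=0.95):
    """Per-phase line current for a three-phase AC feed."""
    return power_w / (math.sqrt(3) * line_voltage_v * power_factor)

rack_power = 100_000  # 100 kW AI rack

i_ac = three_phase_current(rack_power, 415)  # ~146 A per phase
i_dc = dc_current(rack_power, 800)           # 125 A

# Copper scales roughly with (current x conductor count) at fixed density:
ac_copper = i_ac * 4  # three phases + neutral (assumed cable run)
dc_copper = i_dc * 2  # +/- pair
print(f"Copper reduction: {1 - dc_copper / ac_copper:.0%}")  # falls in the 50-80% band
```

Under these assumptions the reduction lands near the middle of the 50%–80% range the research cites; busway designs and higher AC distribution voltages shift the exact figure.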

The Economic Imperative

The financial implications of this technological shift are profound. According to the analysis presented in the white paper, adopting an 800VDC architecture can yield between $4 million and $8 million in capital expenditure (CapEx) savings for every 10-megawatt data center build. These savings are derived from the reduced need for copper, as well as the elimination and consolidation of complex upstream AC components like switchgear and uninterruptible power supplies (UPS).

Beyond the initial build, the operational expenditure (OpEx) benefits are equally compelling. The paper projects an 8% to 12% reduction in annual energy-related operating costs. For hyperscale operators managing facilities that consume as much electricity as a small city, these efficiency gains translate into millions of dollars in savings annually and a significant reduction in their carbon footprint. The simplified power train, with fewer components, also promises higher reliability and lower long-term maintenance costs.
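Scaled arithmetically, the paper's headline ranges work out as follows. The 100 MW campus size and $80M annual energy bill below are illustrative assumptions, not figures from the paper:

```python
# Sketch: scaling the white paper's headline savings ranges.
# Facility size and energy bill are assumptions for illustration only.

def capex_savings(capacity_mw, low_per_10mw=4e6, high_per_10mw=8e6):
    """Scale the $4M-$8M-per-10MW CapEx range to a facility size."""
    scale = capacity_mw / 10
    return low_per_10mw * scale, high_per_10mw * scale

def annual_opex_savings(annual_energy_cost, low=0.08, high=0.12):
    """Apply the projected 8%-12% annual energy-cost reduction."""
    return annual_energy_cost * low, annual_energy_cost * high

lo, hi = capex_savings(100)                 # hypothetical 100 MW campus
print(f"CapEx savings: ${lo/1e6:.0f}M-${hi/1e6:.0f}M")
olo, ohi = annual_opex_savings(80_000_000)  # assumed $80M/yr energy bill
print(f"OpEx savings:  ${olo/1e6:.1f}M-${ohi/1e6:.1f}M per year")
```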

“For hyperscale, GPU, and AI-first data centers, HVDC power is the required foundation for the next decade of compute growth,” said Sean Burke, CEO at Enteligent. “The significant benefits across both CapEx and OpEx will be enormous as data centers scale into multi-megawatt and gigawatt operating domains.”

An Industry-Wide Power Play

This transition is not happening in a vacuum. Enteligent’s proposal is part of a broader, industry-wide movement to standardize a new power paradigm for the AI era. The push for standardization is critical for ensuring interoperability, driving down component costs through mass production, and giving operators the confidence to invest in the new architecture.

Major industry players are already aligning behind the shift. Schneider Electric, a global leader in energy management, is actively collaborating with Enteligent and partnering with the Open Compute Project (OCP) and Current OS to formalize DC architecture standards. “We’ve reached a turning point where power architecture is just as vital as compute power,” said Chris Evanich, New Electrical Distribution Leader at Schneider Electric. “The industry shift toward 800VDC power systems is driven by the need to increase power density inside the AI rack, which can lead to even better efficiency in the power train.”

The OCP, a collaborative community dedicated to redesigning data center hardware, has a sub-project focused on HVDC adoption, with its own white paper expected in early 2026. This effort, combined with the advocacy of GPU giant NVIDIA, which is designing its next-generation rack architectures to leverage 800VDC, signals a powerful consensus. The move to a DC-powered future is no longer a question of if, but when, as the industry collectively works to build a sustainable and scalable foundation for the next wave of artificial intelligence.
