As AI Power Demands Surge, Smarter Power Electronics Will Be Essential

As artificial intelligence becomes deeply embedded in everyday operations and industries, the demand for data centres and the electricity needed to power them is skyrocketing. Experts predict that the energy consumption of this sector could double by 2030 and climb as high as 1,200 terawatt-hours (TWh) by 2035. 

Major tech companies, including Google, Microsoft, and Meta, have all reported rising emissions over the past year as they expand computing capacity to meet the soaring demands of AI. This trajectory poses a serious challenge to achieving near-term net-zero climate goals.

To keep pace, technology giants and data centre operators are racing to build new facilities, tap into available grid capacity, and explore alternative energy sources, including advanced options such as fusion and geothermal power. But beyond building more infrastructure, a critical question remains: how can we make data centres more energy-efficient once they’re built?

One promising opportunity lies in power delivery: specifically, how electricity is regulated and distributed within servers.

At the core of each server is the Graphics Processing Unit (GPU), the driving force behind AI computation. However, supporting the GPU are numerous other components, chief among them voltage regulators and power delivery systems, which are responsible for ensuring stable and precise power flows. These components are essential but come at a cost: they can occupy 70–90% of the server blade’s surface area and are a major source of heat. This heat, in turn, demands more cooling infrastructure, further increasing energy consumption. By improving power delivery systems, we could reduce the number of heat-producing components, ease cooling requirements, and optimise server density, all of which would lower overall energy use.
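To see why conversion efficiency on the blade matters so much, consider that every watt lost in power delivery must also be removed by the cooling system. The sketch below makes that compounding effect concrete. All numbers are illustrative assumptions, not measurements from any vendor, and the cooling overhead is a simplified rule of thumb applied only to conversion losses.

```python
# Illustrative sketch (assumed numbers, not vendor data): how power-delivery
# efficiency on the server blade compounds into total facility draw once
# cooling is included.

def facility_power(it_load_kw: float, conversion_efficiency: float,
                   cooling_kw_per_kw_heat: float = 0.3) -> float:
    """Total power drawn to serve a given useful IT load.

    conversion_efficiency: fraction of input power that reaches the chips;
        the remainder is dissipated as heat in the power-delivery stages.
    cooling_kw_per_kw_heat: assumed extra cooling power per kW of waste
        heat (a simplification: only conversion losses are charged here).
    """
    input_power = it_load_kw / conversion_efficiency
    waste_heat = input_power - it_load_kw
    cooling = waste_heat * cooling_kw_per_kw_heat
    return input_power + cooling

# For a 100 kW useful load, lifting delivery efficiency from 90% to 95%
# saves several kilowatts twice over: once in conversion, once in cooling.
at_90 = facility_power(100, 0.90)  # ~114.4 kW total
at_95 = facility_power(100, 0.95)  # ~106.8 kW total
```

The point of the sketch is the double saving: more efficient delivery cuts both the electrical loss itself and the cooling needed to reject it.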

One company developing a potentially game-changing innovation here is Daanaa, with its Power Transaction Unit (PTU). This programmable, multi-functional power module is designed to handle complex power conversions within a single, compact system. The PTU leverages near-field reactive electromagnetic manipulation to enable highly efficient, single-step voltage conversions, achieving ratios as extreme as 3 V to 800 VDC at over 95% efficiency. By consolidating what would normally require multiple components, the PTU can be placed directly beneath the GPU, freeing up space on the server blade, reducing ohmic losses, and cutting heat generation.
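The advantage of a single-step conversion can be sketched with simple arithmetic: in a cascaded chain, each stage's loss multiplies, so even individually efficient stages compound into a larger total loss. The stage counts and per-stage efficiencies below are assumptions chosen for illustration, not Daanaa specifications; only the 95% single-step figure comes from the article.

```python
# Illustrative comparison (assumed stage efficiencies, not Daanaa specs):
# stepping a high bus voltage down to chip level through cascaded stages
# multiplies each stage's loss, while a single-step converter pays once.

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of converters in series: the product of stages."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# A hypothetical conventional three-stage chain:
cascade = chain_efficiency([0.98, 0.97, 0.96])  # ~0.913 overall
single_step = 0.95                               # the PTU's claimed figure
```

Under these assumed numbers, the cascade wastes nearly twice the power of the single-step conversion, and all of that extra loss appears as heat on the blade.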

While Daanaa’s technology has clear advantages for AI-focused data centres, it also holds potential for other sectors, particularly solar energy and electric vehicles (EVs). In solar applications, integrating the PTU into solar panels can enhance performance and reliability, reduce hot spots, and improve system monitoring. Distributed power electronics like this can also lower the risk of overheating and system failure.

For EVs, especially those using Vehicle-Integrated Photovoltaics (VIPV), Daanaa’s PTU can manage curved panel surfaces, adjust to changing light and shading, and optimise battery configurations. Its bidirectional capabilities allow batteries to run in parallel instead of series, improving fault isolation and system resilience.
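The fault-isolation argument for parallel batteries can be sketched in a few lines. This is a deliberately simplified model (hypothetical module counts, and it ignores electrical details such as voltage matching between strings): a series string delivers nothing if any module fails, whereas parallel modules degrade gracefully.

```python
# Minimal sketch (hypothetical numbers) of the fault-isolation argument:
# a series string is all-or-nothing, while parallel modules contribute
# independently, so one failure costs only that module's share.

def series_capacity(modules_ok: list[bool]) -> float:
    """A series string delivers only if every module works."""
    return 1.0 if all(modules_ok) else 0.0

def parallel_capacity(modules_ok: list[bool]) -> float:
    """Parallel modules each contribute their fraction independently."""
    return sum(modules_ok) / len(modules_ok)

faults = [True, True, False, True]   # one failed module out of four
series_capacity(faults)    # 0.0  -> the whole string is lost
parallel_capacity(faults)  # 0.75 -> three quarters of capacity remains
```

The simplification ignores real-world balancing electronics, but it captures why bidirectional power modules that enable parallel operation improve system resilience.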

Looking Ahead - The Cooling Challenge

As AI drives a rapid expansion of high-performance computing infrastructure, innovation in power electronics will be critical. Solutions like Daanaa’s PTU offer a pathway to reduce heat, improve efficiency, and free up space, both physically and energetically, in the data centre.

Moreover, technologies that streamline uninterruptible power supplies (UPS) and other energy management systems can reduce the footprint of supporting infrastructure, leaving more room for revenue-generating servers and reducing the total energy consumed.

As AI transcends niche applications and becomes foundational to digital infrastructure, the supporting power-electronics ecosystem must evolve in step. Simply scaling up compute is no longer enough: power conversion, distribution, dynamics, cooling, and grid interactions now form critical design domains. Organisations that proactively embrace smarter power-electronics architectures will gain advantages in cost, performance, reliability, and sustainability. Conversely, failure to do so risks stranded infrastructure, inefficiencies, coupling to grid constraints, and slower innovation cycles. For the AI revolution to scale sustainably, the next frontier isn't just building smarter software or faster chips; it's rethinking how we deliver and manage power at every level.
