Vertiv looks at the growing infrastructure demands of AI data centers

Vertiv's latest report points to AI workloads as a key driver of higher power density, faster deployment timelines, and more complex infrastructure needs.

Artificial intelligence is reshaping how data centers are planned, built, and run, and the pressure is coming from several directions at once. A recent report from Vertiv looks at how these forces are pushing infrastructure design in new directions, especially as AI workloads grow larger, denser, and harder to support with older facility designs.

The Vertiv Frontiers report pulls together insights from across the company to outline where data center design is heading. Rather than focusing on a single technology shift, it looks at how power, cooling, energy supply, and software tools are starting to work more closely together as AI becomes a bigger part of computing demand.

Vertiv Chief Product and Technology Officer Scott Armul said data center operators are rethinking how facilities are designed and operated as AI places new demands on both density and deployment speed. He described the industry as "continuing to rapidly evolve" in response to the needs of emerging AI factories.

Armul pointed to "cross-technology forces" driven by extreme densification, which he said are shaping decisions around infrastructure. Higher-voltage DC power designs and advanced liquid cooling are becoming more important as operators look to support "gigawatt scaling" for AI workloads. He also noted that on-site energy generation and digital twin tools are expected to help data centers scale more quickly and support broader AI adoption.

At the core of the report are several forces shaping today's data centers. AI and high-performance computing are pushing power density higher than many facilities were built to handle. At the same time, new sites are being rolled out faster and at a much larger scale. Facilities are increasingly being treated as single units designed to deliver compute, rather than collections of separate systems. The growing mix of chips used in AI tasks complicates standardization.

These pressures are already affecting how data centers handle power. Many facilities still rely on a mix of AC and DC systems that require several conversion steps, which can become less efficient as rack densities rise. Higher-voltage DC designs reduce current, shrink conductor size, and limit conversion stages by shifting more power handling to the room level. As standards and equipment mature, these designs may become more common, particularly in sites that use on-site generation or microgrids.
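The trade-off behind that shift is simple power-law arithmetic: for a fixed rack power, raising the distribution voltage proportionally lowers the current, which is what allows smaller conductors. A minimal sketch of that relationship, using assumed example figures rather than anything from the Vertiv report, and simplified to ignore three-phase and power-factor effects:

```python
# Illustrative only: how higher distribution voltage reduces current
# for the same rack power (I = P / V). The 120 kW rack power and the
# voltage levels are assumed examples, not figures from the report.
# Simplified DC-style comparison; real AC feeds also involve
# three-phase and power-factor terms.

def current_amps(power_watts: float, voltage_volts: float) -> float:
    """Current drawn at a given power and voltage."""
    return power_watts / voltage_volts

rack_power = 120_000  # assume a 120 kW AI rack

for label, volts in [("415 V distribution", 415), ("800 V DC distribution", 800)]:
    print(f"{label}: {current_amps(rack_power, volts):.0f} A")
```

Roughly halving the current in this example is what lets designers shrink conductor cross-sections and drop conversion stages, the efficiency gains the report points to.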

Power, energy, and cooling take center stage

AI workloads are also becoming more distributed. While large investments have gone into centralized facilities that support widely used AI tools, not every organization can rely on shared infrastructure. Industries with strict requirements around data location, security, or response times may need private or hybrid environments. In those cases, upgrading existing sites or building smaller, high-density facilities closer to users may be required, supported by flexible power and cooling systems.

Energy supply is another growing concern. Backup power has long been part of data center design, but limits on grid capacity are now pushing some operators to plan for longer periods of self-sufficiency. On-site generation using gas turbines and similar systems is one response. Strategies that bring power and cooling directly to a site are becoming part of broader planning discussions, driven largely by availability constraints.

Speed is also shaping design decisions. As AI systems grow more complex, operators are under pressure to deploy them faster. Digital twin tools allow teams to model entire facilities in software before construction begins, making it easier to integrate IT systems and physical infrastructure. These designs are often delivered as prefabricated modules, and the report says this approach can cut the time needed to bring systems online by up to half.

Finally, cooling is no longer a secondary concern. Liquid cooling is increasingly necessary for dense AI hardware, and AI tools may also help manage those systems. With more sensors and control software, cooling setups could spot issues earlier and adjust conditions in real time, helping protect costly hardware and reduce downtime as AI workloads continue to grow.
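In software terms, the sensor-driven cooling the report describes often reduces to threshold logic over telemetry. A minimal sketch of that idea; the temperature thresholds and the response labels here are hypothetical placeholders, not anything specified by Vertiv:

```python
# Minimal sketch of threshold-based coolant monitoring, as sensor-driven
# cooling control might look in software. WARN_C and TRIP_C are assumed
# example thresholds, not values from the report.

WARN_C = 45.0  # assumed coolant inlet temperature warning threshold (Celsius)
TRIP_C = 55.0  # assumed threshold for protective action

def classify_reading(temp_c: float) -> str:
    """Classify a coolant inlet temperature reading."""
    if temp_c >= TRIP_C:
        return "trip"   # e.g., throttle or isolate the affected racks
    if temp_c >= WARN_C:
        return "warn"   # e.g., raise pump speed and alert operators
    return "ok"
```

A real control loop would feed readings like these back into pump and valve setpoints continuously, which is the "adjust conditions in real time" behavior the report anticipates.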