India’s AI data centre boom is real, but execution, not announcements, will decide outcomes

Gigawatt-scale commitments surge, but delivery timelines, partner depth, and infrastructure readiness will separate intent from reality.

India is in the middle of its largest-ever data centre expansion cycle. Between March 2025 and April 2026, operators announced roughly 30 large projects across the country, collectively adding about 3.5 GW of planned capacity.

On paper, the numbers are significant. Andhra Pradesh and Telangana alone account for over 2 GW, driven by AI‑centric mega campuses in Visakhapatnam and Hyderabad. Maharashtra remains the deepest and most competitive market by project count, while Chennai and Noida continue to attract hyperscale and enterprise‑led buildouts.

What stands out, however, is a different reality beneath the surface.

India’s data centre expansion is no longer just a capacity story. It is an execution story, and increasingly, a channel story. What appears to be a unified national boom is, in reality, a two-speed market: a small set of AI-first mega projects still years away from delivery, and a broader base of phased hyperscale campuses where execution risk, partner capability, power readiness, and cooling infrastructure determine outcomes.

Announced capacity outpaces delivery on the ground

At a state level, announced capacity remains highly concentrated. Andhra Pradesh leads due to a single 1 GW AI-native campus planned in Visakhapatnam, followed by Telangana’s GPU-heavy builds around Hyderabad.

Maharashtra dominates in terms of project volume, with multiple campuses under development across Mumbai and Navi Mumbai. Tamil Nadu and Uttar Pradesh continue to attract investments through Chennai and Noida, respectively.

(Note: Capacity figures reflect publicly announced ultimate campus or IT load targets as disclosed by operators; commissioning is typically phased over multiple years.)

A large share of this capacity is not operational. Across markets, facilities are being commissioned in phases, with timelines in several cases extending to 2028 and beyond.

In many instances, particularly across large, multi‑phase campuses, less than 20 percent of announced capacity is currently live.

This gap between announced and operational capacity is where execution becomes critical. Chennai’s AI-ready campuses may be announced at 45 to 130 MW, but initial deployments begin in single-digit-megawatt phases.

Kolkata’s greenfield developments start at 16 MW, with full buildouts stretching over multiple years. Even mature markets like Noida and Navi Mumbai rely on sequential delivery rather than large-scale, immediate commissioning.

For the channel ecosystem, this phased execution matters more than headline capacity.

AI infrastructure raises execution complexity and partner dependency

What also stands out in this cycle is how sharply the definition of “data centre ready” has changed.

Nearly half of all announced capacity now markets itself as AI‑focused or AI‑ready, but these labels hide large differences in technical maturity.

(Note: AI‑focused capacity includes facilities explicitly designed or marketed by operators for high‑density GPU or AI workloads; classifications are based on operator disclosures.)

AI-centric campuses in Hyderabad and Visakhapatnam are designed for high GPU density, advanced cooling architectures, and large-scale power availability. Some facilities target rack densities far beyond traditional enterprise deployments, aligning with future GPU roadmaps rather than current demand.

In contrast, expansions in Mumbai, Noida, and Chennai remain largely general-purpose hyperscale builds, with AI workloads deployed as dedicated zones rather than core design principles.

This distinction is critical. AI data centres operate under tighter constraints, where power redundancy, thermal efficiency, network latency, and GPU lifecycle management directly impact performance and cost structures. Infrastructure decisions define the viability of the entire deployment.

As a result, execution increasingly shifts towards specialised partners.

High-density cooling, liquid immersion systems, GPU cluster integration, and high-speed networking require capabilities that most operators do not fully maintain in-house.

For Indian partners, this expands the opportunity beyond traditional infrastructure roles into AI-specific deployment, integration, and lifecycle management. At the same time, it also introduces a higher bar for execution, contributing to extended project timelines.

Sustainability further adds to this complexity.

Renewable energy integration, water efficiency, and low PUE targets are now baseline requirements, particularly for hyperscaler workloads. Delivering these commitments requires long-term coordination across power infrastructure, storage, and operations, again favouring experienced partners.
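Of the metrics above, PUE (power usage effectiveness) is the most mechanical: it is simply total facility power divided by the power delivered to IT equipment, so a lower value means less overhead spent on cooling and power conversion. A minimal sketch, using illustrative figures rather than any operator’s disclosed numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt reaches compute; modern hyperscale
    designs commonly target values well below 1.5.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical example: a campus drawing 13 MW at the meter
# to serve a 10 MW IT load runs at a PUE of 1.3.
print(round(pue(13_000, 10_000), 2))  # 1.3
```

The same ratio explains why high-density AI halls push operators towards liquid cooling: reducing the cooling overhead in the numerator is one of the few levers left once the IT load itself is fixed.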

India’s data centre expansion is not slowing, but it is becoming more execution intensive. Capacity announcements will continue to scale, but the real differentiator will be the ability to convert planned megawatts into operational infrastructure on time.

For the channel ecosystem, this cycle rewards depth, not breadth. Expertise in AI infrastructure, high-density deployments, and phased execution models will define relevance.

The next phase of India’s data centre growth will not be led by those who announce the largest capacity, but by those who can deliver it, and those who can enable that delivery at scale.

Editor’s note: Capacity figures represent a mix of IT load and total campus power as disclosed by operators. Timelines reflect announced project phases, not guaranteed commissioning dates.