April 29, 2026
Visakhapatnam just turned into the industry’s clearest “AI-scale or bust” signal. Reliance Industries is pitching a $17bn, 1.5GW data centre cluster in Andhra Pradesh with a captive solar-and-battery build alongside it — and that’s landing in the same city where Google’s partners have already broken ground on a gigawatt-class hub. The speed is the story… and so is the friction: local environmental clearance decisions are already being challenged on process, water, and power assumptions.
The Big Stories
Google has begun construction of a gigawatt-scale AI hub in Visakhapatnam, led by AdaniConnex and Nxtra by Airtel, with Google committing $15bn over 2026–2030 to build an AI ecosystem that includes three data centre campuses, a cable landing station, and fibre expansion under the America-India Connect initiative. It’s a full-stack play: compute, connectivity, and an explicit clean-energy angle — plus community watershed programs. The tell here is ambition at the city scale: this isn’t “another campus,” it’s an attempt to manufacture an AI region.
That same Visakhapatnam build is already running into governance scrutiny. Andhra Pradesh’s SEIAA granted environmental clearances to two Adani-owned SPVs near Vizag days before the foundation event, classifying them as Category B2 (which bypasses Union-level EIA and public hearings), covering ~601.4 acres with stated water needs of 501 KLD and a power requirement around 1,626 MW (backup ~971.5 MW). Activists and experts are pushing for reclassification and a fuller appraisal. For investors, this is the reminder that “AI hubs” are now political objects: permitting pathways, water accounting, and disclosure quality can become timeline risk — fast.
In the US, regulators are getting more explicit about who pays for the grid when a data centre shows up with nine-figure load. Wisconsin’s Public Service Commission approved revisions to We Energies’ data centre tariff requiring very large customers to fully fund generation and grid resources, extending minimum contracts to 15 years, and lowering eligibility from 500 MW to 100 MW. The PSC also rejected a 75% capacity-only payment option and flagged that some network costs sit with MISO/FERC rules. Translation: the era of socialising upgrades onto general ratepayers is getting harder — and “100 MW” is now big enough to trigger special treatment.
Michigan is offering a different flavour of the same bargain: growth for infrastructure commitments, with politics baked in. DTE Energy tied a two-year pause on seeking customer rate increases to an Oracle-backed Related Digital project in Saline Township, contingent on the first data centre being online by end-2027 and regulatory approvals being granted. The wider campus is pegged at $16bn; DTE also filed a $474.3m revenue request to support grid investments and expects ~$9bn of electric-system improvements from two data-centre contracts through 2045. The structure is striking: the utility is effectively using data-centre capex as a lever in the rate narrative — which raises the stakes on delivery dates and the credibility of load forecasts.
Europe’s policy fight is getting sharper, too. Irish environmental groups won High Court leave to challenge the CRU’s rules allowing data centres to use fossil generation for the first six years, with an 80% renewables requirement thereafter. The challengers argue the policy breaches Ireland’s climate law and risks locking in emissions as data centres already consume 22% of national electricity — projected to exceed 30% by 2031. This isn’t just about one rule; it’s about whether “temporary” fossil backstops become the de facto permission slip for new builds in constrained grids.
Behind the Headlines
The interconnection queue has become the new chip shortage — except it’s slower, more local, and harder to brute-force. Data Center Knowledge says grid backlogs and power delivery are now the primary bottlenecks, with developers commonly facing 36–48 month timelines from commitment to delivery even as US utilities are expected to invest roughly $1.4tn through 2030 across generation, transmission, and grid capacity. That framing helps explain today’s mix of responses: Wisconsin pushes cost responsibility onto customers; Michigan tries to tie rate politics to a marquee project; and Ireland is litigating the emissions consequences of “just build a generator.” The key takeaway is that power is no longer a siting input — it’s the schedule.
If interconnection is the choke point, it’s no surprise to see genuinely odd form factors being pitched as a workaround. Mousterian (M3) and Samsung Heavy Industries say they will build liquid-cooled floating data centres on barges, targeting over 1,500 MW over three years by placing capacity adjacent to existing generation and leaning on underutilised water-cooled thermal plants. The promise is blunt: bypass 5+ year queues and compress timelines from years to quarters. It’s a moonshot, but it’s also a signal that the market will test almost anything — maritime engineering included — if it turns grid wait time into deployable megawatts.
On the supply chain side, the cooling stack is still consolidating toward “server-to-facility” liquid competence. Vertiv acquired Strategic Thermal Labs to deepen capabilities in cold-plate design, server-side liquid cooling, and thermal validation, spanning design through commissioning and ongoing ops. This is less about buying a product and more about buying credibility and speed in dense compute deployments, where the engineering edge is increasingly in integration and validation rather than brochure specs. The practical implication: as AI racks get denser, the vendors that can own the thermal chain end-to-end will keep taking wallet share — and set the integration standards everyone else has to follow.