April 28, 2026
Oracle just made one of the boldest on-site power calls we’ve seen in the AI era: Project Jupiter will be “fully powered” by Bloom Energy fuel cells, with up to 2.45GW of installed capacity planned for its Doña Ana County, New Mexico campus. That’s not a pilot or a hedge — it’s a wholesale replacement for what had been planned as gas turbines and diesel generators. Add in the unusually explicit community package (including $50m for water systems and $360m for schools/infrastructure/services), and Jupiter is shaping up as a template for how hyperscale AI campuses try to buy social licence as much as megawatts.
The Big Stories
Oracle and its partners are trying to pre-empt the two fights that reliably stall big AI builds: power and water. Alongside the fuel-cell plan, Oracle says Project Jupiter will use non-potable industrial well water and closed-loop, non-evaporative direct-to-chip cooling, and won’t use community drinking water or drill new wells. The company also notes a one-time 960,000-gallon startup fill for the Bloom Energy microgrid, with no water during normal operations. The bet here is clear: if you can credibly decouple AI growth from both grid queue pain and municipal water anxiety, you can move faster — and get yelled at less.
Debt markets are still willing to underwrite AI capacity when there’s a credible offtake story attached. Hut 8 is selling bonds to help finance a planned $3bn, 245MW data center in St. Francisville, tied to a long-term lease backed by Fluidstack and supported by Google, with power under an Entergy subsidiary agreement. The detail that matters isn’t just the size — it’s the structure: debt appetite follows contracted (or contract-adjacent) cashflows, and that’s a signal to developers that “financeable” is increasingly synonymous with “pre-sold.”
The UK’s AI-energy narrative took a credibility hit. The government revised its Compute Roadmap to show a top emissions estimate of 123 million metric tons for AI data centres, a jaw-dropping correction from a previously cited 0.142 million metric tons, prompting MPs to demand transparency as Parliament probes AI’s energy demands against the 2050 net zero target. Investors should read this less as a spreadsheet error and more as a warning: if policymakers don’t have a firm grip on the numbers, they’ll reach for blunt instruments — and data centres are an easy target.
India’s data centre land-and-power machine keeps scaling, and real estate players want the annuity. Lodha Developers says it will develop 1GW of built-to-suit data centre capacity at Palava, investing ₹13,000–15,000 crore (excluding land) and signing an MoU with the Government of Maharashtra for a 400-acre green data centre park, with plans to monetise land for up to 2GW of capacity. The message: in high-growth markets, the competition isn’t only between operators — it’s increasingly between landowners packaging “grid-ready” campuses and selling time-to-power as the product.
And then there’s the moonshot that’s also a branding exercise. Meta agreed to access future capacity from a space-based solar power project intended to beam electricity to Earth to help power its data centres amid rising AI demand. Compared with Oracle’s on-site fuel cells, this is more option-value than near-term megawatts — but it’s also a tell that big platforms want energy narratives that sound as ambitious as their compute roadmaps, even if the grid is still doing the heavy lifting.
Behind the Headlines
Liquid cooling is quietly shifting from “special project” to operations discipline — and operators who treat it like a one-time install are going to get burned. In T5 Data Centers’ rundown of operational lessons from liquid cooling deployments, the emphasis is on coordination, documentation, training, automation, and lifecycle planning, with scenario-based training and tighter collaboration with manufacturers. What’s notable is the framing: liquid cooling isn’t presented as an exotic add-on for a few AI halls, but as operationally necessary as rack densities climb. The takeaway for executives is unglamorous but decisive: the winning operators will be the ones who productise liquid-cooling ops — sampling, alerting, maintenance workflows — the same way they productised remote hands and change control.
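As a concrete illustration of what “productised” liquid-cooling ops might look like, here is a minimal sketch of a coolant-loop telemetry check. The metric names and thresholds are hypothetical assumptions for illustration, not T5’s actual practice or any vendor’s recommendations:

```python
# Illustrative coolant-loop telemetry check. Field names and thresholds
# are hypothetical examples only, not operator or vendor guidance.
from dataclasses import dataclass


@dataclass
class CoolantReading:
    loop_id: str
    supply_temp_c: float       # coolant supply temperature, deg C
    conductivity_us_cm: float  # fluid electrical conductivity, uS/cm
    flow_lpm: float            # flow rate, litres per minute


# Hypothetical acceptable bands an operator might codify per metric.
LIMITS = {
    "supply_temp_c": (15.0, 45.0),
    "conductivity_us_cm": (0.0, 25.0),
    "flow_lpm": (30.0, float("inf")),
}


def check_reading(r: CoolantReading) -> list[str]:
    """Return an alert string for each metric outside its band."""
    alerts = []
    for field, (lo, hi) in LIMITS.items():
        value = getattr(r, field)
        if not (lo <= value <= hi):
            alerts.append(
                f"{r.loop_id}: {field}={value} outside [{lo}, {hi}]"
            )
    return alerts
```

The design point is the one the paragraph makes: once sampling limits and alert rules live in code, they can plug into the same alerting, training, and change-control workflows as everything else, rather than living as tribal knowledge.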
If you can’t measure power quickly, you can’t manage it — and AI workloads are notoriously hard to model without slowing teams down. MIT and the MIT‑IBM Watson AI Lab’s EnergAIzer estimates GPU power consumption in seconds with about 8% error, aiming to help both data centre operators and algorithm developers improve energy efficiency. That “seconds” point is the whole game: power estimation only changes behaviour if it’s fast enough to sit inside development and scheduling loops. Expect more of these lightweight, engineering-friendly tools to show up in procurement conversations as operators try to turn “efficiency” from a promise into a verifiable knob.
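To make the “fast enough to sit inside the loop” point concrete, here is a minimal sketch (not EnergAIzer itself) of turning sampled GPU power draw into an energy number a scheduler or developer could act on. The trace is synthetic; in practice the (time, watts) pairs would come from a telemetry agent:

```python
# Minimal energy-from-power-samples sketch. The power trace here is
# synthetic; real samples would come from GPU telemetry polling.

def energy_joules(samples: list[tuple[float, float]]) -> float:
    """Trapezoidal integration of (time_s, power_w) samples into joules."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)
    return total


# Synthetic trace: a GPU holding roughly 300 W for 10 seconds,
# sampled once per second.
trace = [(float(t), 300.0) for t in range(11)]
joules = energy_joules(trace)           # 300 W * 10 s = 3000 J
kwh = joules / 3.6e6                    # convert joules to kWh
print(f"{joules:.0f} J ({kwh:.6f} kWh)")
```

A per-job number like this, computed in seconds rather than via offline modelling, is exactly the kind of “verifiable knob” the paragraph describes: cheap enough to run inside development and scheduling loops, and precise enough to compare runs.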
Europe is starting to treat grid hardware like critical infrastructure — and the implications spill into renewables-backed data centre strategies. The European Commission banned EU funding for energy projects using inverters from “high-risk” countries, a move aimed at Chinese suppliers after a December 2025 attack in Poland, even as the EU highlights roughly 100GW/year of inverter capacity and another 45GW planned by 2027. This isn’t a niche procurement rule; it’s a reminder that “clean power” supply chains are now tangled up with cybersecurity and industrial policy. For data centre developers leaning on solar-plus-storage narratives in Europe, equipment provenance is becoming a financing variable, not just an engineering detail.
Subscribe to Data Centres Briefings