March 10, 2026
Nscale just pulled in an eye-watering $2B Series C at a $14.6B valuation — and it’s not being shy about what that money is for: hundreds of megawatts of GPU capacity across Europe, the US, and Asia. That kind of cheque is a reminder that “AI infrastructure” is no longer a theme; it’s a capital market with its own gravity. The catch: the physical constraints (water, power, permitting) are starting to show up in the same news cycle as the funding.
The Big Stories
Nscale’s $2B raise, led by Aker ASA and 8090 Industries, cements a new tier of AI-first infrastructure players that want to control more of the stack (“ground to cloud”) rather than just lease space and power. It matters because it normalises hyperscale-sized capex for companies that aren’t hyperscalers — and it’s a direct challenge to incumbent colo and GPU cloud models that rely on slower, project-by-project expansion.
West Virginia landed a headline-grabbing greenfield build: Penzance Management is putting $4B into the Bedington Campus, pitched as a 1.9m sq ft “High Impact Intelligence Center” delivering 600MW of IT capacity on 548 acres in Berkeley County. The governor’s office emphasised no state funding. Whether or not that holds over time (infrastructure always finds its way onto someone’s balance sheet), 600MW is the kind of number that forces grid planners and local communities to treat one site like an industrial mega-load.
The most underpriced constraint today isn’t GPUs — it’s water. A UC Riverside–Caltech study warns AI-driven data center growth could require an additional 697 million to 1.45 billion gallons/day of peak water capacity within four years, with $10B–$58B in water infrastructure costs; it also notes three major tech firms secured multi‑million‑gallon/day allocations in February 2026 for projects in Virginia, Louisiana, and Indiana. That research (and the real-world allocations behind it) is a flashing sign that “cooling strategy” is becoming “municipal utility strategy.” See: AI and data center growth could overwhelm municipal water systems.
Pennsylvania lawmakers are moving to formalise that scrutiny. The state House Energy Committee advanced two bills that would force annual reporting of water and energy use (HB2150) and create a model municipal ordinance for siting guidelines (HB2151), both on party lines. The political driver is community opposition, but the system driver is PJM: the grid operator says data centers account for nearly all projected demand growth, alongside capped capacity costs through 2030. Translation: reporting and siting rules are becoming the price of entry in constrained power markets. Story: Pennsylvania House advances bills to regulate data centers.
In the UK, Northumberland is shaping up as a serious cluster, not a one-off site. Leaflets have gone out for a second data centre proposal at West Sleekburn, with claims of 665–780 construction jobs and 460–640 operational jobs, and it sits near QTS’s Cambois project (flagged as potentially up to £10B of investment). The specifics will be tested in planning, but the direction is clear: more compute is being pushed out to regions that can assemble land, power, and political support faster than the obvious metro hubs. Story: Plans revealed for second Northumberland data centre, up to 640 jobs.
Behind the Headlines
Dell’s new hardware is a clue about where “AI compute” is migrating next. The PowerEdge XR9700 is an IP66-sealed, liquid-cooled outdoor edge server designed to run dense Edge AI and Cloud RAN workloads at fully exposed sites — without specialised cabinets or HVAC. That’s a bet that not all inference (and some telecom-adjacent workloads) will justify a trip back to the big data centre, especially when latency, backhaul cost, or site constraints dominate. If this category takes off, it quietly expands the addressable “data centre” market into streetside, industrial, and utility footprints — and shifts discussions from real estate to fleet operations and remote management.
The supply chain story is getting oddly specific: copper, not just power. Rio Tinto’s Nuton unit hit first copper production at Johnson Camp using bioleaching, and then signed a two-year deal with AWS to supply low-carbon Nuton copper to US data center component manufacturers. That combination — process innovation plus an offtake agreement tied to data centre hardware — looks like early-stage vertical coordination, not a generic ESG press release. If AI buildouts keep accelerating, inputs like copper (and who can certify its footprint) start to matter in procurement the way “renewable-backed electrons” already do. Story: Rio Tinto’s Nuton produces first copper at Johnson Camp.
Talent is now a scaling constraint, not an HR talking point. atNorth’s CFO argues the sector needs to rethink its talent model as AI infrastructure and sustainability demands ramp; the company says its workforce has tripled in three years while maintaining above-average gender diversity and employing staff from 30+ nationalities. The real takeaway is operational: when you’re scaling headcount that fast, “how you hire, train, and retain” becomes part of your delivery schedule just like transformers and switchgear. Firms that treat labour as a bottleneck early will ship capacity; everyone else will discover it mid-build. Story: Data centre industry must rethink its talent model.