How IREN Broke Through the AI Power Bottleneck

  • AI growth is driving a surge in data center electricity demand. In the US, data center power use may reach 106 gigawatts by 2035, 36% higher than recent forecasts. Energy, grid access, and permitting now constrain AI expansion more than chips.

  • Large AI models require reliable, high-density power and advanced cooling. Traditional data centers cannot meet rising GPU power demands. Companies are building “AI factories,” purpose-built facilities with integrated power, cooling, and scalable infrastructure.

  • IREN signed a $9.7 billion, five-year deal with Microsoft to deliver GPU cloud capacity in Texas. The project will provide 200 megawatts within a 750-megawatt site. AI competition now depends on securing clean energy and scalable, power-ready infrastructure.

Summary by Bloomberg AI

As AI drives a surge in data-center demand, the constraint is not chips but rather access to clean, abundant electricity and purpose-built data center infrastructure that can be deployed and expanded quickly.

What looks like a simple AI prompt on a screen is, in reality, a factory process: compute at scale powered by electricity. Multiply that across more than a billion users,[1] and energy becomes a linchpin in the AI-driven data center boom. 

That shift is now showing up in the forecasts. A surge of early-stage data center development is rapidly reshaping electricity demand around the world. In the US, for instance, data-center power consumption is on course to reach 106 gigawatts by 2035.[2] That is 36% higher than estimates made just seven months earlier. 
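For context, the revision implies the earlier estimate was roughly 78 gigawatts. A quick sketch of that arithmetic, assuming the stated 36% is measured directly against the prior forecast (a simplification of how such projections are actually compared):

```python
# Back-of-the-envelope check of the forecast revision cited above.
new_forecast_gw = 106   # projected US data-center power demand by 2035
revision = 0.36         # stated increase over the estimate made seven months earlier

# If the new figure is 36% above the old one, the old one is recoverable:
prior_forecast_gw = new_forecast_gw / (1 + revision)
print(f"Implied earlier estimate: ~{prior_forecast_gw:.0f} GW")  # ~78 GW
```

In other words, roughly 28 gigawatts of additional projected demand appeared in just seven months, which is the scale of revision driving the infrastructure concerns described below.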

This is not simply a utility planning problem; it’s a constraint on the AI economy. Training and running models require significant power delivered with high reliability, sometimes at the same time and in the same places where grids are already tight.

Kent Draper, IREN’s Chief Commercial Officer, puts it plainly: “As models grew larger and training runs moved from tens to hundreds of megawatts, it was obvious that AI would collide with the same physical constraints we’d been navigating for years in high-density compute. 

“Permitting, grid interconnection and transmission capacity are multi-year problems. Compute demand, by contrast, is growing exponentially and compounding day by day. That mismatch is why energy and data center space are now the gating factors for AI progress.”

In effect, the winners in AI will not be determined solely by model performance. They will be determined by who can reliably secure access to power and purpose-built data center infrastructure at scale. That shift is giving rise to a new model for building compute: the AI factory.

The Rise of the AI Factory

The response from infrastructure builders is to treat AI compute less like a row of rented servers and more like an industrial system. These are “AI factories,” purpose-built facilities designed to deliver accelerated computing at scale with the power, cooling and operational design that modern GPU clusters require.

Two forces are driving that shift. First, enterprise AI is moving cloud-first as models and workloads grow beyond the practical limits of legacy data centers and on-premises environments. Nearly three-quarters of organizations now prefer a hybrid approach, combining on-premises systems with public cloud, rather than trying to build everything in-house.[3] 

Second, the infrastructure requirements are changing quickly. New GPU generations are raising performance expectations while pushing power density higher, which is accelerating adoption of liquid-cooled, high-density designs beyond the scope of traditional data centers.

Meeting those needs requires an integrated approach, from power and cooling to the way facilities are built and operated.

“We’re taking the fragmentation out of the stack through vertical integration,” Draper says. “Instead of stitching together a patchwork of third-party providers, we build at industrial scale in places where clean power is available and the grid can support growth. The goal is to make large GPU clusters faster to deploy, easier to scale and more efficient to run.”

From Concept to Customer Commitments

Proof of that shift is showing up in the customer roster. In November, IREN announced a multi-year agreement with Microsoft valued at about $9.7 billion to deliver large-scale GPU cloud infrastructure over a five-year term.[4]

Under the deal, IREN will host phased deployments through 2026 at its liquid-cooled data centers under construction in Childress, Texas. The buildout is designed to accommodate large GPU clusters across four phases, collectively providing 200 megawatts of critical IT load within a 750-megawatt site.

Demand for compute is now driving a new wave of greenfield development, as developers look beyond traditional data-center hubs for the power, land and interconnection capacity that AI clusters require. IREN says it has positioned itself accordingly, scaling across gigawatt-class sites in Texas and Oklahoma to support phased expansions as customer demand grows.

“Given the growth we expected in the digital world and the constraints on growth in those traditional urban data center markets,” Draper says, “we purposefully sited our projects where large amounts of land and power could be secured and data center infrastructure delivered efficiently at scale.”

IREN’s agreement with Microsoft and its expansion across North America point to the new reality of the sector: as AI demand shifts from experimentation to production, the scarce asset is not only silicon. It is the ability to deliver power-ready, high-density infrastructure that can scale predictably, and do so with clean energy in the mix.