IT Sustainability Think Tank: Counting the cost of AI datacentres and their energy use

The expansion of artificial intelligence (AI) datacentres has forced a long-overdue reckoning in the technology sector. What was once framed as an internal efficiency challenge for cloud providers has become a visible pressure on national energy systems, local communities (particularly in resource-stressed regions), and enterprise sustainability strategies.

This is not a theoretical debate. Across the UK and other mature markets, AI datacentres are being fast-tracked through planning systems, prioritised for grid connections, and framed as strategic national assets.

At the same time, governments are relying heavily on AI-enabled productivity gains to deliver economic and public-sector reform, often without fully accounting for the infrastructure and energy trade-offs involved.

Microsoft’s recent call for a “community-first” approach to AI infrastructure is therefore timely. It reflects growing awareness amongst the hyperscalers that AI growth risks eroding public trust if they fail to address the technology’s energy and environmental impacts.

But it also exposes a deeper issue: if AI infrastructure is now treated as a strategic national capability – and subject to the same scrutiny as other forms of critical infrastructure – then responsibility for its impacts can no longer sit with suppliers alone, nor be absorbed by the wider energy sector without debate.

The real cost of AI infrastructure is no longer hidden

For much of the past decade, cloud economics allowed energy consumption to be abstracted away from enterprise decision-making. Hyperscalers invested at scale, efficiencies improved, and sustainability narratives focused on relative gains versus on-premises infrastructure.

AI changes that equation, because its workloads change the scale, timing, and concentration of energy demand. Unlike earlier waves of cloud adoption, AI infrastructure drives sustained high-intensity compute, exacerbates peak demand pressures, and accelerates the need for grid reinforcement and transmission upgrades.

This is already visible in the prioritisation of AI Growth Zones and datacentres for grid access (at a time when local electricity systems may already be under strain). Public investment in energy infrastructure has always underpinned economic development, and AI datacentres are increasingly framed as critical national infrastructure. However, once AI infrastructure becomes visible at this level, the question of who pays becomes unavoidable… and simply treating AI growth as a public good does not absolve private actors of responsibility.

Hyperscalers do invest heavily in renewable energy procurement and efficiency (for instance, AWS’s early achievement of renewable electricity matching targets and Microsoft’s investments in carbon removal and water stewardship). Yet corporate-level commitments do not automatically extend to local grid upgrades, peak-load balancing, or network resilience costs; and the reporting of global averages can be used to hide local policy discrepancies.

Credibility in climate and energy policy now depends less on ambition and more on implementation. Shifting costs off balance sheets does not make them disappear; it simply obscures who bears them.

Diverging national policy regimes intensify this challenge further. In the US, AI infrastructure is now explicitly framed as a national security and economic priority, with environmental and planning constraints treated as barriers to acceleration. In the UK and Europe, however, AI growth is being layered onto energy systems already constrained by decarbonisation targets and public accountability.

US-headquartered hyperscalers therefore operate across fundamentally different policy environments, with sustainability commitments that must travel across borders even when their home markets do not enforce equivalent constraints (a divergence that matters for UK suppliers, integrators, and enterprise buyers operating across global AI supply chains).

Environmental responsibility cannot be outsourced to hyperscalers alone

Decisions about model size, training frequency, workload placement, and operating schedules all materially affect energy consumption, and hyperscalers have responded with efficiency gains, renewable energy procurement, and public commitments to net-zero operations.

However, focusing responsibility solely on big tech firms risks missing a deeper structural issue: whilst AI infrastructure is built by hyperscalers, its ultimate shape is determined by how enterprises choose to use it.

Many organisations still treat AI as an abstract capability layered onto existing cloud contracts, rather than as a new class of infrastructure (one with distinct sustainability implications that encompass water usage and land constraints as well as energy supply). Some AI workloads are latency-sensitive and energy-intensive; others are batch-based and far more flexible. Some require GPUs running at full capacity; many do not.  

Decisions made by enterprise IT leaders about architecture, deployment models, and operating patterns directly influence energy consumption and environmental impact. Treating these dynamics as someone else’s problem is no longer tenable. In practice, this means that environmental responsibility must be shared across infrastructure providers, policymakers, and enterprise users.

The environmental implications are even more pressing when the AI application in question is itself designed to enable system optimisation and energy efficiency, since choosing a particular sustainability tech solution is only justified if it delivers a provable net environmental benefit.

Governance, not goodwill

If AI datacentre growth is to be supported through public-private cost-sharing (whether for grid reinforcement, planning acceleration, or clean power generation), it must be governed explicitly.

Without transparency, cost-sharing risks becoming cost-shifting – particularly in energy systems already under pressure (a risk amplified by the fragmented state of governance across the energy sector).

The UK’s evolving approach to social value in public procurement offers a useful parallel. Over time, social value has moved from aspiration to measurable expectation, with suppliers required to demonstrate additionality rather than rely on generic corporate commitments.

Sustainability commitments for AI infrastructure will need to follow a similar trajectory, preventing tech firms from hiding behind broad global statements that make little difference at the local level, where the environmental impacts of AI growth are actually felt.

Microsoft’s call to arms is notable not because it proposes a fully formed solution, but because it acknowledges that unmanaged AI infrastructure growth will create social backlash. From a policy perspective, this raises difficult trade-offs.

Prioritising AI datacentres for grid connections may support economic growth (albeit with longer-term net effects as yet unproven), but it can also crowd out decarbonisation efforts. Funding grid reinforcement to support AI growth may be rational, but only if benefits are demonstrably shared rather than captured by the few.

For enterprises, the implication is clear: energy responsibility will increasingly show up in procurement, regulation, and reporting expectations. Sustainability is no longer confined to corporate operations; it extends into the infrastructure choices embedded in digital transformation programmes.

What enterprise IT leaders should be doing now

For CIOs and IT leaders, the question is not whether AI will increase energy and environmental pressure, but how to plan for it without undermining sustainability commitments. Three priorities stand out, all of which can be built into procurement criteria:

  • Make AI workloads visible. Many organisations lack granular insight into where AI workloads run, how energy-intensive they are, and how they interact with broader cloud consumption. Treating AI as “just another cloud service” is no longer sufficient. Integrated climate intelligence platforms, rather than siloed reporting tools, are thus becoming essential.
  • Treat architecture as a sustainability decision. Choices around model size, training frequency, deployment location, and workload scheduling all have energy implications. In some cases, shifting workloads away from peak demand periods or consolidating models can materially reduce environmental impact without sacrificing business value (see the illustrative sketch after this list).
  • Change the supplier conversation. Enterprise buyers should demand greater transparency from hyperscalers and integrators around energy use, grid impact, and environmental trade-offs. Corporate net zero commitments alone are no longer sufficient proxies for responsible AI delivery; what matters is how AI services are delivered in practice.
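
To make the second point concrete, the sketch below is a minimal, illustrative scheduler for a deferrable (batch) AI training job: it picks the start hour with the lowest average grid carbon intensity that still meets the job’s deadline. Everything here is a hypothetical placeholder – the TrainingJob parameters, the hourly_intensity_forecast stub, and the intensity profile itself – and a real implementation would draw on a grid operator’s or cloud provider’s carbon-intensity data plus the organisation’s own workload metadata.

```python
"""Minimal sketch of carbon-aware scheduling for a flexible (batch) AI workload.

Assumes a hypothetical hourly forecast of grid carbon intensity (gCO2/kWh);
in practice this would come from a grid operator or cloud provider feed.
"""
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class TrainingJob:
    name: str
    duration_hours: int   # how long the job needs to run
    deadline: datetime    # latest acceptable finish time


def hourly_intensity_forecast(start: datetime, hours: int) -> list[tuple[datetime, float]]:
    """Hypothetical stand-in for a real carbon-intensity forecast."""
    # Illustrative values only: lower overnight, higher at the evening peak.
    profile = [120, 110, 100, 95, 100, 130, 180, 230, 260, 240, 220, 210,
               200, 210, 220, 240, 280, 320, 310, 270, 220, 180, 150, 130]
    return [(start + timedelta(hours=h), profile[(start.hour + h) % 24])
            for h in range(hours)]


def best_start_time(job: TrainingJob, now: datetime) -> datetime:
    """Pick the start hour that minimises average carbon intensity over the
    job's runtime while still finishing before its deadline."""
    horizon = int((job.deadline - now).total_seconds() // 3600)
    if horizon < job.duration_hours:
        raise ValueError("Deadline too tight for this job's duration")
    forecast = hourly_intensity_forecast(now, horizon)

    def avg_intensity(offset: int) -> float:
        window = forecast[offset:offset + job.duration_hours]
        return sum(ci for _, ci in window) / job.duration_hours

    best = min(range(horizon - job.duration_hours + 1), key=avg_intensity)
    return forecast[best][0]


if __name__ == "__main__":
    now = datetime(2025, 1, 6, 9, 0)
    job = TrainingJob("nightly-finetune", duration_hours=4,
                      deadline=now + timedelta(hours=24))
    print("Schedule", job.name, "to start at", best_start_time(job, now))
```

The same logic extends naturally to choosing between regions or providers on the basis of local grid conditions, which is where the supplier conversation in the third bullet becomes material.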

These steps do not absolve hyperscalers of responsibility, but they recognise that enterprise demand is a powerful lever too.

A more honest conversation about AI growth

The global race to scale AI infrastructure is now being shaped by policy regimes willing to trade environmental constraint for speed and dominance. In the US, that trade-off is explicit; in the UK, although it’s the government’s mission to “make Britain a clean energy superpower”, the environmental consequences of AI are arriving regardless. AI workloads run on physical infrastructure, draw power from real grids, and impose costs that someone must absorb. Unexamined assumptions about where environmental impacts land are no longer acceptable; but neither is treating them as a purely technical issue, or framing them as an ethical failure of Big Tech alone.

The constraint here is not technological capability, but coordination and governance (the same structural barriers shaping the UK’s clean energy transition more broadly). AI infrastructure is being layered onto a system already undergoing structural transformation, one in which further growth inevitably involves trade-offs.

The uncomfortable reality is that AI growth creates costs that must be paid by someone. If those costs are hidden, deferred, or externalised, then trust in both technology and sustainability agendas will erode. If they are acknowledged, governed, and shared transparently, AI infrastructure can evolve in ways that support both long-term economic and environmental resilience.

Microsoft’s call for a community-first approach should therefore be read less as reassurance, and more as a warning. In a world where AI growth is being actively accelerated by geopolitics, sustainability will not be delivered by intent alone. It will depend on who is prepared to insist on accountability – in procurement, in architecture, and in policy.

For enterprise IT leaders, this means moving beyond simplistic narratives and acknowledging that AI ambition and sustainability credibility are now inseparable. The technology may be central to future productivity and growth, but intelligence at scale is not free. The question is no longer whether we can afford to power AI responsibly… it is whether we can afford not to.
