
Trying to regulate AI through subjective judgments about model capabilities is a losing battle. Instead, we should mirror carbon markets and introduce risk-weighted permits for the massive physical infrastructure required to build frontier models, writes Joel Christoph.

In 2005, the European Union launched the world’s largest carbon trading system. The concept was simple: set a cap on total emissions, distribute permits and let polluters trade them. Those who cut emissions cheaply could sell their surplus; those who could not had to pay. The system was imperfect at first. Permits were over-allocated, prices crashed and critics declared the experiment dead. Then regulators tightened the cap. Today, emissions covered by the EU scheme have fallen by roughly 47% since its launch.

Artificial intelligence (AI) governance is stuck where climate policy was before cap-and-trade. More than 60 countries have published national AI strategies, and hundreds of companies have signed voluntary safety commitments. Yet frontier AI development continues to concentrate in a handful of firms, safety investment remains largely discretionary, and most countries have no practical leverage over the systems that will reshape their economies. Good intentions have not changed the underlying incentives.

Carbon markets solved this problem by giving regulators a measurable quantity to price. AI governance can do the same, and it has an even better measuring stick.

The input you can count

The most powerful AI systems are no longer limited by ideas alone. They are limited by chips, data centres, electricity, water and supply chains. The International Energy Agency estimates that global data centre electricity consumption reached roughly 415 terawatt-hours in 2024 and could surpass 945 terawatt-hours by 2030, comparable to Japan’s total electricity consumption. This is a shift in industrial scale, not a niche technology story.

This physical intensity is an opportunity for governance. Compute is metered, logged and billed. Cloud providers already track usage to the GPU-hour. While regulating the content or capabilities of AI models involves subjective judgments, regulating compute means working with a physical quantity that can be independently verified, much as carbon emissions can be measured at the smokestack.

How risk-weighted compute permits could work

Imagine a framework modelled on cap-and-trade. A regulator defines which training runs are high-stakes, using compute thresholds and related signals of capability. It sets an aggregate cap on the compute available for those runs within a given period. Developers obtain permits denominated in compute units.

Crucially, permits are risk-weighted. Independent evaluations inform a risk multiplier that scales the permit requirement up or down. Developers that demonstrate stronger safeguards and lower misuse risk face a lower effective obligation per unit of compute; those that do not face a higher one. This converts safety effort from a public-relations claim into a costed input that firms can reduce by investing in evaluation, security and deployment safeguards.
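
As a rough illustration of the arithmetic, the sketch below computes the effective permit obligation for two developers running identical training jobs. Everything in it is a hypothetical assumption: the article proposes no specific threshold or multiplier values, so the compute cut-off, the multipliers and the run sizes are invented for demonstration.

```python
# Illustrative sketch of risk-weighted permit obligations.
# All thresholds, multipliers and run sizes are hypothetical,
# chosen for demonstration; the article specifies no values.

HIGH_STAKES_THRESHOLD_FLOP = 1e26  # assumed cut-off for a "high-stakes" run


def permit_obligation(training_flop: float, risk_multiplier: float) -> float:
    """Permits owed for a training run, denominated in compute units.

    risk_multiplier comes from independent safety evaluations:
    below 1.0 for developers with strong safeguards, above 1.0 for weak ones.
    """
    if training_flop < HIGH_STAKES_THRESHOLD_FLOP:
        return 0.0  # below the cap's scope; no permit required
    return training_flop * risk_multiplier


# Two developers running the same 2e26 FLOP frontier training run:
strong_safeguards = permit_obligation(2e26, risk_multiplier=0.8)
weak_safeguards = permit_obligation(2e26, risk_multiplier=1.5)

print(f"Well-evaluated developer owes {strong_safeguards:.1e} compute units")
print(f"Poorly-evaluated developer owes {weak_safeguards:.1e} compute units")
```

The multiplier is where the incentive lives: for the same training run, the poorly evaluated developer must buy nearly twice the permits, which is, in effect, a price on under-investment in safety.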

Because permits are tradable, the market allocates scarce frontier compute to the highest-value uses and discovers the lowest-cost path to compliance, avoiding the rigidity of prescriptive rules that become outdated as technology evolves. Enforcement relies on stochastic audits and escalating penalties. The objective is not to police every run in real time. It is to make compliance cheaper than evasion.
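
The closing claim, that compliance must be cheaper than evasion, reduces to a simple inequality: evasion is deterred when the audit probability multiplied by the penalty exceeds whatever skipping permits would save. A minimal sketch with invented figures (none of the numbers below come from the article):

```python
# Hypothetical enforcement arithmetic for stochastic audits.
# A rational developer evades only if the expected savings exceed the
# expected penalty, so the regulator needs:
#     audit_probability * penalty > permit_cost_saved
# All figures below are invented for illustration.


def evasion_is_profitable(permit_cost_saved: float,
                          audit_probability: float,
                          penalty: float) -> bool:
    """True if skipping permits beats compliance in expectation."""
    expected_penalty = audit_probability * penalty
    return permit_cost_saved > expected_penalty


# With a 10% audit rate, the penalty must exceed 10x the permit cost
# that evasion would save:
print(evasion_is_profitable(permit_cost_saved=50e6,
                            audit_probability=0.10,
                            penalty=400e6))  # True: penalty too low to deter
print(evasion_is_profitable(permit_cost_saved=50e6,
                            audit_probability=0.10,
                            penalty=600e6))  # False: compliance wins
```

This is why stochastic audits can be cheap: the regulator does not need to inspect every run, only to keep the product of audit rate and penalty above the value of evasion.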

A tool for global inclusion, not just control

The World Economic Forum’s AI Governance Alliance has highlighted that meaningful global AI governance requires broad participation. Yet most governance discussions remain confined to a small group of wealthy nations and large technology firms. Much of the world is currently a price taker in AI, dependent on a handful of jurisdictions for chips and cloud access.

Permit auctions generate revenue. That revenue can fund audit capacity, evaluation infrastructure and safety-relevant public goods in countries that currently lack them. Coalitions can agree on mutual recognition of audits, lowering compliance friction for responsible developers. For middle powers and emerging economies in Southeast Asia, Africa and Latin America, this reduces the false choice between ungoverned dependence and the costly, unrealistic pursuit of self-sufficiency.

Honest limits

The analogy between carbon and compute should not be pushed too far. Compute is not harm, and a permit system would be one layer in a broader governance stack that includes evaluations, incident reporting and application-specific rules. Carbon markets took years to calibrate, suffering from over-allocation and questionable offsets. A compute permit system would face its own challenges: defining which training runs qualify as high-stakes, preventing developers from fragmenting workloads to stay below thresholds, and ensuring that monitoring infrastructure does not become a tool for government surveillance of private innovation.

These are serious design problems, not reasons to abandon the approach. Financial regulators already manage analogous tensions. Risk-weighted capital requirements in banking involve the same trade-offs between measurability, gaming and enforcement. The lesson from both carbon markets and financial regulation is that imperfect pricing still outperforms no pricing at all.

A window that will not stay open

The next two to three years represent a narrow window. Compute supply chains are being reshaped by export controls, industrial policy and record capital expenditure. Governance frameworks established during this period will set the terms for a generation. If the international community waits until frontier capability is even more concentrated, the political economy of reform becomes far harder.

Carbon markets taught policy-makers a lesson that still holds: you do not need perfect control to create meaningful incentives. You need a measurable unit, clear monitoring rules and a system that makes compliance easier than evasion. AI governance can adapt that lesson. Risk-weighted compute permits will not solve every problem. But they can help the world move from slogans to scalable incentives at exactly the moment when AI capability and infrastructure are accelerating together.

  • Joel Christoph is a Fellow at Harvard Kennedy School, where his research focuses on the economics of frontier AI governance, including compute regulation, market-based security mechanisms, and international coordination. This article was originally published by the World Economic Forum.