
The Moral Panic Over AI's Carbon Footprint

A Note on This Article

This is a long essay. Read it if you care about the numbers behind AI’s environmental impact, or if you’re tired of hearing the same moral panic repeated without evidence. I wrote it because the public conversation has become disconnected from reality: people condemn AI’s carbon footprint while streaming video for hours, flying annually, eating meat daily. They’ve absorbed panic from headlines and headlines from other people’s panic, with no one checking the math.

What you’ll find here are numbers. Real ones, sourced and verified. Some of them are uncomfortable (training costs are real). Most of them are reassuring (they’re smaller than you think, falling fast, and dwarfed by the things that actually drive your carbon footprint). But the point isn’t reassurance. It’s accuracy. Your personal decision to use or not use AI almost certainly doesn’t matter for your carbon footprint. The institutional decisions about how to power data centres, how to measure efficiency, whether to site infrastructure in water-stressed regions — those matter enormously. This article tries to separate those levels of analysis so you can think clearly about which conversations are actually worth having.


Generative AI has acquired a peculiar moral status: the one technology people feel entitled to condemn on environmental grounds while scrolling Netflix in bed. The charge sheet is familiar. Data centres drinking rivers dry. GPU clusters devouring electricity that could power cities. Each ChatGPT query burning the equivalent of a small forest.

Most of it is wrong. Not harmlessly wrong in the way that myths usually are, but directionally wrong in a way that distorts where genuine concern should be directed. The numbers tell a more nuanced story, and one that looks very different when you account for what AI replaces rather than simply what it consumes.

What Your AI Session Actually Costs

[Image: data centre]

Start with what a typical session actually costs. A ten-minute conversation with Gemini or Claude — five to eight back-and-forths — uses about 4,000 to 5,000 tokens. At Sam Altman’s figure of 0.34 Wh per query, that’s roughly 2 Wh of electricity [1]. To put that in perspective: it’s about a minute of watching a 55-inch TV, or roughly a sixth of a full phone charge.

One million tokens is easier to anchor to something real: that’s about 750,000 words, or fifteen average novels. An active daily user accumulates roughly one million tokens a month. A developer using AI coding tools heavily might hit it in a single day. One million tokens on current hardware uses 200-500 Wh, producing 40-100 grams of CO₂ on the UK grid. Here’s how that compares to other everyday activities:

| Activity | CO₂ |
| --- | --- |
| 10-minute AI session (~5,000 tokens) | ~0.5 g |
| 1 million tokens (text LLM) | ~40–100 g |
| 1 hour Netflix streaming | ~36–55 g |
| 1 hour PS5 gameplay | ~40–80 g |
| 1 hour Zoom call | ~17 g |
| 1 beef burger | ~2,500 g |
| London to New York return (per passenger) | ~1,700,000 g |
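The first two rows of that table can be recomputed in a few lines. This is a back-of-envelope sketch: the 0.34 Wh per query and 200–500 Wh per million tokens come from the article, while the ~0.2 kg CO₂/kWh UK grid intensity is an approximation I am assuming.

```python
# Back-of-envelope check of the per-session and per-million-token rows.
# Assumed: 0.34 Wh per query (the article's figure) and a UK grid
# intensity of roughly 0.2 kg CO2 per kWh (my approximation).

UK_GRID_KG_PER_KWH = 0.2      # approximate UK grid carbon intensity
WH_PER_QUERY = 0.34           # Sam Altman's per-query figure

def session_co2_grams(queries: int) -> float:
    """CO2 in grams for a chat session of `queries` back-and-forths."""
    wh = queries * WH_PER_QUERY
    return wh / 1000 * UK_GRID_KG_PER_KWH * 1000  # Wh -> kWh -> kg -> g

def million_token_co2_grams(wh_per_million: float) -> float:
    """CO2 in grams for one million tokens at a given energy cost."""
    return wh_per_million / 1000 * UK_GRID_KG_PER_KWH * 1000

print(round(session_co2_grams(6), 2))   # a six-query session, under a gram
print(million_token_co2_grams(200), million_token_co2_grams(500))
```

With six queries this lands around 0.4 g, and the 200–500 Wh range maps onto the 40–100 g row directly.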

The framing that AI is an environmental crisis, while ignoring the beef on the plate and the flight booked for Christmas, is not analysis. It is aesthetic prejudice dressed as ethics.

But here’s where it gets important: not all AI is created equal. Text inference — chatting with an LLM — is cheap. Image generation costs sixty times more per query. Video generation costs even more than that. The problem is that people talk about “AI’s carbon footprint” as if asking ChatGPT a question is the same as generating a photorealistic video. It’s not. Not even close.

The Training Cost: Real, But Diminishing

Training a big AI model is expensive. GPT-3 used about 1,287 MWh during training — equivalent to 500 tonnes of CO₂. GPT-4 likely needed fifty times that. You’ll hear people say training one model emits as much carbon as five cars over their entire lifetimes. That’s defensible for older training runs. What gets left out: training happens once. Then the model answers billions of queries. The cost gets split across all of them.

And that changes everything. ChatGPT serves roughly one billion queries a day. Each year of that deployment generates about 25 times more emissions from running the model than the entire training process cost upfront. That ratio gets bigger every month. Meta, AWS, and Google have all published analyses showing that 60-90% of a model’s total lifetime emissions come from running it, not training it [2]. Training becomes a rounding error.
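A quick way to see why amortisation makes training vanish per query. This sketch uses the article's figures (~500 tonnes of CO₂ for a GPT-3-scale run, ~one billion queries a day); the one-year horizon is an arbitrary choice for illustration.

```python
# Amortisation sketch: spreading a one-off training cost across the
# queries the model serves. Figures from the article: ~500 t CO2 for
# a GPT-3-scale training run, ~1e9 queries served per day.

TRAINING_CO2_KG = 500_000        # ~500 tonnes, GPT-3-scale run
QUERIES_PER_DAY = 1_000_000_000

def training_grams_per_query(days_in_service: int) -> float:
    """Training CO2 amortised over every query served so far, in grams."""
    total_queries = QUERIES_PER_DAY * days_in_service
    return TRAINING_CO2_KG * 1000 / total_queries

# After one year in service, training adds milligrams per query:
print(training_grams_per_query(365))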

Training costs are also dropping fast. DeepSeek V3 reportedly trained for about $5.5 million and performs competitively with GPT-4, whose training is estimated to have cost $100 million and produced a model with roughly three times as many parameters [3]. Better hardware, smarter data, newer techniques — they all compress the training cost while the models get better. Meta’s Llama 3.2 produced about 240 tonnes of CO₂ equivalent during training, with almost all power from renewables [4]. Nearly carbon-neutral.

The takeaway matters. Training gets the headlines. But training is becoming a smaller slice of total AI emissions every year. Within a few years of launch, a model’s running costs dwarf what it cost to build. Worrying about training instead of operations is like obsessing over the carbon footprint of manufacturing your car while ignoring all the fuel you’ll burn driving it.

A Month in the Life: Putting LLMs in the Carbon Ledger

[Image: television streaming]

Numbers on their own don’t mean much. What matters is how they stack up against everything else you actually do. Take a typical UK month: you drive a petrol car, eat meat, stream video, and use AI.

Typical UK adult, per month, approximate CO₂:

| Activity | Monthly CO₂ |
| --- | --- |
| Petrol car (UK average mileage) | ~100–120 kg |
| Meat consumption | ~90–110 kg |
| Home heating (gas) | ~80–120 kg |
| Flights (annualised from typical 1–2 per year) | ~100–200 kg |
| Video streaming (2 hrs/day) | ~2–4 kg |
| AI assistant use (8 queries/day, typical) | ~0.02 kg (20 g) |

Look at that AI figure: 0.02 kg. It’s not just small. It’s invisible. Giving up AI entirely saves about as much carbon each month as driving a couple of hundred metres less.

Compare that to what actually moves the needle:

| Stop doing this | Monthly saving |
| --- | --- |
| Eat vegan instead of meat | ~70–90 kg |
| Switch to an EV | ~80–100 kg |
| Never fly | ~100–200 kg |
| Stop streaming | ~2–4 kg |
| Stop using AI entirely | ~0.02 kg |

The pattern is obvious. Your carbon budget lives in what you eat and how you get around. Whether you use AI is statistically irrelevant. Eat one fewer beef burger per week and you’ll do more for the climate than quitting AI forever.
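The "AI assistant" row above can be reproduced in a few lines. The 8 queries/day and 0.34 Wh per query are the article's figures; the ~0.2 kg CO₂/kWh grid intensity and ~120 g CO₂/km for a petrol car are common UK approximations I am assuming for the comparison.

```python
# Rough sketch of the monthly "AI assistant" row. Assumed inputs:
# 8 queries/day at 0.34 Wh each (article), UK grid at ~0.2 kg CO2/kWh
# and a petrol car at ~120 g CO2/km (my approximations).

def monthly_ai_co2_kg(queries_per_day: int = 8,
                      wh_per_query: float = 0.34,
                      grid_kg_per_kwh: float = 0.2,
                      days: int = 30) -> float:
    """Monthly CO2 (kg) from light AI assistant use."""
    kwh = queries_per_day * wh_per_query * days / 1000
    return kwh * grid_kg_per_kwh

monthly_kg = monthly_ai_co2_kg()
print(f"{monthly_kg * 1000:.0f} g per month")   # on the order of 20 g

# Equivalent distance of petrol driving at ~120 g/km:
metres = monthly_kg * 1000 / 120 * 1000
print(f"{metres:.0f} m of driving")
```

The result is a few hundred metres of driving per month, which is why the AI row rounds to nothing against everything else in the table.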

This doesn’t mean AI scale doesn’t matter — at a billion queries a day, small per-query costs add up. But it does mean individual guilt is being misdirected. And misdirection has a real cost: it sucks attention away from choices that actually matter.

Inference, Scale, and the Economics of Efficiency

[Image: airplane in flight]

Here’s where it gets interesting. The efficiency of running AI models is improving dramatically. New hardware, such as the H100 compared with the two-year-older A100, gets several times more work from the same energy, and more still on some inference workloads. The Stanford AI Index found the cost to run a GPT-3.5-class model dropped 280-fold between late 2022 and late 2024 [5]. Historically, computing improves efficiency by about 100x every decade.

If that pace continues, the energy cost per token in 2030 could be a fraction of today’s. That’s even as people run more queries.
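To make that extrapolation concrete, here is a toy projection. It is illustrative only: the 100x-per-decade rate and the 200–500 Wh-per-million-token baseline are the article's figures, and real progress is lumpy rather than a smooth exponential.

```python
# Toy projection: if efficiency improves ~100x per decade, how cheap is
# a million tokens in 2030 versus a 2024 baseline of 200-500 Wh?
# Illustrative only; progress is lumpy, not a smooth exponential.

def efficiency_factor(years: float, per_decade: float = 100.0) -> float:
    """Multiplicative efficiency gain after `years` at `per_decade` per 10y."""
    return per_decade ** (years / 10)

baseline_wh = 350                 # midpoint of 200-500 Wh per million tokens
factor = efficiency_factor(2030 - 2024)
print(round(factor, 1), "x cheaper;", round(baseline_wh / factor, 1), "Wh/Mtok")
```

Six years at the historical rate is roughly a 16x gain, which would take a mid-range million-token cost from ~350 Wh down to a few tens of watt-hours.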

There’s a good reason for this: it’s profitable. For companies like Anthropic and OpenAI, running the model is now their biggest cost. Every joule they save is money back. Cost pressure and environmental pressure are aligned — unusual for tech. That’s powerful.

Then there’s the grid. Major cloud companies have committed to running on renewable power. If AI shifts to regions with cheap wind or hydro, or gets scheduled to run when renewables are abundant, carbon per query drops without any hardware changes. That’s happening, and mostly nobody talks about it.

The Substitution Question: What Energy Is Not Being Used?

Nobody asks this question. But it’s the only one that matters. When we evaluate the energy cost of an electric car, we compare it to petrol cars, not to staying home. When we calculate the carbon cost of a video call, we subtract the flight it replaces. AI should be evaluated the same way.

Take a concrete example: building a software project using 100 million tokens. That’s a realistic total for an AI-assisted web app. Running those tokens costs 30-150 kWh, probably around 100 kWh.

Now build the same thing with three developers over three months without AI. A typical US resident accounts for about 88,200 kWh of energy per year in total, of which roughly 4,400 kWh is grid electricity. Three people for three months draw roughly 3,300 kWh of grid electricity between them.

The AI version uses a fraction of that energy, even if you only partially displace the human work.

(Yes, this is apples-to-oranges — human baseline energy exists whether you’re coding or sleeping, but GPUs only draw power when in use. But even accounting for that, the gap is huge.)
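The comparison can be sketched directly with the article's numbers (100M tokens at ~30–150 kWh, and ~4,400 kWh of grid electricity per person per year):

```python
# Substitution sketch: 100M tokens of AI-assisted development (~30-150
# kWh, call it 100) versus three developers' share of grid electricity
# over three months (~4,400 kWh per person per year, per the article).

def human_grid_kwh(people: int, months: int,
                   kwh_per_person_year: float = 4_400) -> float:
    """Grid electricity attributable to `people` over `months`."""
    return people * kwh_per_person_year * months / 12

ai_kwh = 100                      # midpoint estimate for 100M tokens
humans_kwh = human_grid_kwh(3, 3)
print(humans_kwh, "kWh vs", ai_kwh, "kWh; ratio", round(humans_kwh / ai_kwh))
```

Even taking the top of the AI range (150 kWh), the human-team baseline is more than twenty times larger.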

A 2024 paper in Scientific Reports found humans burn 40 times more energy than an LLM to produce equivalent written output. For lighter models, it’s 1,000 times more.

There’s another category: work that just wouldn’t happen without AI. A small business owner builds a website with AI instead of not having one. A researcher synthesises a literature review in two hours instead of two weeks. Someone in a country with few doctors gets expert medical explanation without waiting months for an appointment. These aren’t replacing some human expert. The alternative isn’t human — it’s nothing. And nothing has zero carbon cost and zero utility.

The Genuine Concerns

None of the above excuses the industry from scrutiny. Several specific concerns are legitimate and underserved by current discourse.

The transparency problem is real. Car companies publish fuel economy. AI companies publish almost nothing. Anthropic, Google, OpenAI — none of them publish standardised figures for how much energy their models actually use. That blocks comparison, kills accountability, and leaves regulators guessing. They should be required to report efficiency in standard terms: watt-hours or grams of CO₂ per million tokens.

Water use gets overstated per query but is genuinely serious at scale. The problem isn’t global — it’s location. A data centre in Scotland (rain-fed) is nothing like one in Arizona (Colorado River). Both might report similar water numbers, but one is drawing on scarcity. Building AI in water-stressed regions should be watched closely.

Reasoning models are different animals entirely. OpenAI’s o3 and DeepSeek R1 run long internal chains of reasoning before giving you an answer. They can cost 30-50x more energy per query than a normal chat. One o3 task on a benchmark used an estimated 1,785 kWh — equivalent to two months of electricity for a typical US household. Don’t conflate that with asking ChatGPT a question.
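The household comparison checks out arithmetically. Quick verification below; the ~10,500 kWh/year average US household electricity consumption is an assumption of mine, not a figure from the article.

```python
# Quick arithmetic check on the o3 figure: 1,785 kWh expressed in
# months of a typical US household's electricity. The ~10,500 kWh/year
# average is an assumed figure, not one given in the article.

US_HOUSEHOLD_KWH_PER_YEAR = 10_500
o3_task_kwh = 1_785

household_months = o3_task_kwh / (US_HOUSEHOLD_KWH_PER_YEAR / 12)
print(round(household_months, 1))   # about two months, matching the claim
```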

Finally: the growth curve. Even if efficiency keeps improving (and it should), total tokens are growing faster. AI isn’t replacing old infrastructure. It’s new demand on top of everything else. The question is whether renewables build fast enough to power it, and whether AI genuinely replaces energy-intensive work or just gets added on top.

Proportion

So where does this leave us? Not absolution. Not panic. Proportion.

Text AI at this scale is a manageable chunk of digital energy. Smaller per query than streaming, dwarfed by flying, invisible next to food systems. Efficiency is improving. The incentives point the right direction. Substitution effects (what AI replaces) are real but unaccounted for in public debate.

For you personally, unless you’re generating videos or running reasoning models constantly, AI is not moving your carbon needle. A burger is heavier than a prompt. A flight is heavier than an essay. Your footprint is set by food and travel, not how often you chat with a bot.

What actually matters is institutional. We need transparency — real numbers on energy per token, published by every company. We need sensible siting — data centres placed where water is abundant and power is clean. We need efficiency pressure to keep working. And we need to stop comparing AI in isolation and start asking: what would happen instead? Does AI replace something energy-heavy, or just add on top?

The environmental conversation about AI is worth having. What it absolutely does not need is another round of thoughtless moral panic. The self-righteousness of condemning AI while streaming Netflix for two hours is not a moral position. It is performance.


Footnotes

  1. Data Center Dynamics, “Sam Altman: ChatGPT queries consume 0.34 watt-hours of electricity and 0.000085 gallons of water.” https://www.datacenterdynamics.com/en/news/sam-altman-chatgpt-queries-consume-034-watt-hours-of-electricity-and-0000085-gallons-of-water/

  2. Embedl, “AI and UN Sustainability Goals: Reducing the Energy and Carbon Footprint of AI Models.” https://www.embedl.com/knowledge/ai-and-un-sustainability-goals-reducing-the-energy-and-carbon-footprint-of-ai-models

  3. Marmelab, “AI Carbon Footprint.” https://marmelab.com/blog/2025/03/19/ai-carbon-footprint.html

  4. IEEE Computer Society, “Sustainable Future of AI Language Models.” https://www.computer.org/publications/tech-news/community-voices/sustainable-future-of-ai-language-models

  5. Stanford Human-Centered Artificial Intelligence, “The 2025 AI Index Report.” https://hai.stanford.edu/ai-index/2025-ai-index-report
