The Moral Panic Over AI's Carbon Footprint
Generative AI has acquired a peculiar moral status: the one technology people feel entitled to condemn on environmental grounds while scrolling Netflix in bed. The charge sheet is familiar: data centres drinking rivers dry; GPU clusters devouring electricity that could power cities; each ChatGPT query burning the equivalent of a small forest.
Most of it is wrong. Not harmlessly wrong in the way that myths usually are, but wrong in ways that redirect genuine concern to the wrong targets.
This panic didn’t come from nowhere. In 2019, Emma Strubell and colleagues estimated that training a single large language model could emit as much carbon as five cars over their lifetimes1. The paper was meant to push machine learning towards efficiency; instead it became the baseline for “AI is dirty” arguments. A 2023 study of GPT-3’s water usage produced similarly visceral numbers: a 500ml bottle of water for every 10 to 50 responses, depending on where and when the model was deployed2. Both were legitimate pieces of research. The problem is what happened next: the academic caveats got stripped out by journalists, the headline numbers got repeated by activists, and the public absorbed them without anyone asking whether they still applied or what they were being compared to. The “AI drinks water bottles” narrative is technically defensible but extraordinarily misleading.
The numbers tell a different story once you put them in context.
What Your AI Session Actually Costs

Start with what a typical session actually costs. A ten-minute conversation with Gemini or Claude - five to eight back-and-forths - uses about 4,000 to 5,000 tokens. At Sam Altman’s figure of 0.34 Wh per query, that’s roughly 2 Wh of electricity3. To put that in perspective: it’s about a sixth of a full smartphone charge, or a minute or so of watching a 55-inch TV.
One million tokens is easier to anchor to something real: that’s about 750,000 words, or ten to fifteen novels’ worth of text. An active daily user accumulates roughly one million tokens a month. A developer using AI coding tools heavily might hit it in a single day.
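If you want to sanity-check that arithmetic, here is a minimal sketch in Python. The per-query figure and the tokens-per-session estimate are the rough inputs quoted above, not measurements, so treat the output as order-of-magnitude only.

```python
# Rough session arithmetic, assuming Altman's 0.34 Wh/query figure
# and the 4,000-5,000 token session described above. These are
# quoted estimates, not measurements.

WH_PER_QUERY = 0.34            # Sam Altman's reported per-query figure
QUERIES_PER_SESSION = (5, 8)   # five to eight back-and-forths
TOKENS_PER_SESSION = 4_500     # midpoint of the 4,000-5,000 range

low = WH_PER_QUERY * QUERIES_PER_SESSION[0]
high = WH_PER_QUERY * QUERIES_PER_SESSION[1]
print(f"Session energy: {low:.1f}-{high:.1f} Wh")        # ~1.7-2.7 Wh

# Scale the midpoint up to one million tokens:
wh_per_million = (low + high) / 2 / TOKENS_PER_SESSION * 1_000_000
print(f"Per million tokens: ~{wh_per_million:.0f} Wh")   # ~490 Wh
```

The midpoint scales to just under 500 Wh per million tokens, consistent with the range quoted next.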
One million tokens on current hardware uses 200-500 Wh, which works out to 40-100 grams of CO₂ on the UK grid (the conversion is sketched in code after the table). Here’s how that compares to other everyday activities:
| Activity | CO₂ |
|---|---|
| 10-minute AI session (~5,000 tokens) | ~0.5 g |
| 1 million tokens (text LLM) | ~40-100 g |
| 1 hour Netflix streaming | ~36-55 g |
| 1 hour PS5 gameplay | ~40-80 g |
| 1 hour Zoom call | ~17 g |
| 1 beef burger | ~2,500 g |
| London to New York return (per passenger) | ~1,700,000 g |
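The table’s first two rows fall straight out of one conversion: watt-hours times grid carbon intensity. A minimal sketch, assuming a UK grid intensity of roughly 200 g CO₂/kWh (a round figure; the true number varies by year and by hour):

```python
# Energy to emissions, assuming ~200 g CO2/kWh for the UK grid.
# Grid intensity varies by hour and season; this is a round figure.

GRID_G_PER_KWH = 200

def grams_co2(watt_hours: float) -> float:
    """Grams of CO2 implied by a given energy use in watt-hours."""
    return watt_hours / 1000 * GRID_G_PER_KWH

print(grams_co2(2))     # 10-minute session:   ~0.4 g (table rounds to ~0.5 g)
print(grams_co2(200))   # 1M tokens, low end:  ~40 g
print(grams_co2(500))   # 1M tokens, high end: ~100 g
```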
Framing AI as an environmental crisis while ignoring the beef on the plate and the flight booked for Christmas is not analysis. It’s virtue signalling without the maths.
A Month in the Life

So how does that stack up against everything else you actually do? Take a typical UK month: driving a petrol car, eating meat, heating a home, streaming video, and, for some of us, using AI. It’s not a complete accounting, but it captures the biggest movers. (The AI row is derived in a sketch after the table.)
Typical UK adult, per month, approximate CO₂:
| Activity | Monthly CO₂ |
|---|---|
| Petrol car (UK average mileage) | ~100-120 kg |
| Meat consumption | ~90-110 kg |
| Home heating (gas) | ~80-120 kg |
| Flights (annualised from typical 1-2 per year) | ~100-200 kg |
| Video streaming (2 hrs/day) | ~2-4 kg |
| AI assistant use (8 queries/day, typical) | ~0.02 kg (20 g) |
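The AI row looks implausibly small, so it’s worth deriving rather than asserting. A sketch, reusing the same assumed inputs as earlier:

```python
# Monthly AI footprint, assuming 8 queries/day at 0.34 Wh each and
# ~200 g CO2/kWh grid intensity. Same rough inputs as before.

QUERIES_PER_DAY = 8
WH_PER_QUERY = 0.34
GRID_G_PER_KWH = 200
DAYS = 30

monthly_wh = QUERIES_PER_DAY * WH_PER_QUERY * DAYS    # ~82 Wh
monthly_g = monthly_wh / 1000 * GRID_G_PER_KWH        # ~16 g
print(f"~{monthly_g:.0f} g CO2/month")                # consistent with ~20 g in the table
```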
Compare that to what actually moves the needle:
| Stop doing this | Monthly saving |
|---|---|
| Eat vegan instead of meat | ~70-90 kg |
| Switch to an EV | ~80-100 kg |
| Never fly | ~100-200 kg |
| Stop streaming | ~2-4 kg |
| Stop using AI entirely | ~0.02 kg |
The pattern is obvious. Your carbon budget lives in what you eat and how you get around. Whether you use AI barely registers. Eat one fewer beef burger per week and you’ll do more for the climate than quitting AI forever.
This doesn’t mean AI scale doesn’t matter - at over a billion queries a day, small per-query costs add up. But it does mean individual guilt is being misdirected.
Training, Efficiency, and the Direction of Travel
The efficiency of running AI models is improving dramatically. New hardware gets far more work from the same energy: an H100 can deliver up to ten times the work per unit of energy of the A100 it replaced two years earlier. The Stanford AI Index found the cost to run a GPT-3.5-class model dropped 280-fold between late 2022 and late 20244. Historically, computing efficiency has improved by roughly 100x per decade.
If that pace continues, the energy cost per token in 2030 could be a small fraction of today’s, even as total query volume grows.
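To see what the historical trend implies if it holds, here is a toy extrapolation. It illustrates the “100x per decade” claim rather than forecasting anything; the starting figure is the midpoint of the 200-500 Wh range above.

```python
# Toy extrapolation: if efficiency improves ~100x per decade, energy
# per million tokens falls by 100**(1/10) ~= 1.58x each year.
# Illustrative only; not a forecast.

START_YEAR = 2024
WH_PER_MILLION = 350            # midpoint of the 200-500 Wh range
annual_gain = 100 ** (1 / 10)   # ~1.58x better per year

for year in range(START_YEAR, 2031):
    energy = WH_PER_MILLION / annual_gain ** (year - START_YEAR)
    print(year, f"~{energy:.0f} Wh per million tokens")
# 2030: ~22 Wh, roughly 6% of today's figure.
```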
Training a big AI model is expensive. GPT-3 used about 1,287 MWh during training, producing an estimated 552 tonnes of CO₂5. GPT-4 likely needed many times that, though OpenAI has not published the figure. You’ll hear people say training one model emits as much carbon as five cars over their entire lifetimes. That’s defensible for older training runs. What gets left out: training happens once. Then the model answers billions of queries. The cost gets split across all of them.
ChatGPT serves over a billion queries a day6. Each year of that deployment generates far more emissions from running the model than the entire training process cost upfront. Multiple analyses, including research from Hugging Face and Carnegie Mellon, have found that the majority of a model’s total lifetime emissions come from inference rather than training7. Training becomes a rounding error.
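The amortisation point is easy to make concrete. A back-of-envelope sketch using GPT-3’s published training emissions and the billion-queries-a-day figure cited above; the pairing is illustrative, since GPT-3 is not the model serving today’s traffic:

```python
# Amortising a one-off training cost over a year of deployment,
# using GPT-3's published 552 tCO2e and ~1 billion queries/day.
# Illustrative pairing: GPT-3 isn't the model serving that traffic.

TRAINING_T_CO2 = 552
QUERIES_PER_DAY = 1_000_000_000
DAYS = 365

grams_per_query = TRAINING_T_CO2 * 1_000_000 / (QUERIES_PER_DAY * DAYS)
print(f"~{grams_per_query * 1000:.2f} mg CO2 per query")  # ~1.5 mg
# Compare ~0.07 g/query of inference (0.34 Wh at 200 g/kWh):
# the training share really is a rounding error.
```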
Training costs are also dropping fast. DeepSeek V3’s final training run cost about $5.5 million in compute, and the model competes with GPT-4, whose training reportedly cost over $100 million8. Better hardware, smarter data, newer techniques - they all compress the training cost while the models get better. Meta reports that its operations have been matched to 100% renewable energy since 2020, making the market-based greenhouse gas emissions for Llama training effectively zero9. The location-based emissions (what the grid actually produces) are higher, but the direction is clear: training on renewables is increasingly the norm for large labs.
It’s also profitable to be efficient! For companies like Anthropic and OpenAI, running the model is now their biggest cost. Every joule they save is money back. Cost pressure and environmental pressure are aligned - unusual for tech.
Governance
None of the above excuses the industry from scrutiny. Several specific concerns are legitimate and underserved by current discourse.
The transparency problem is real. Car companies publish fuel economy; AI companies publish next to nothing. Anthropic, Google, OpenAI - none routinely publishes how much energy its models actually use. That blocks comparison, kills accountability, and leaves regulators guessing. They should be required to report efficiency in standard terms: watt-hours or grams of CO₂ per million tokens.
What actually matters is institutional. We need transparency - real numbers on energy per token, published by every company. We need sensible siting - data centres placed where water is abundant and power is clean. We need efficiency pressure to keep working. And we need to stop comparing AI in isolation and start asking: what would happen instead? Does AI replace something energy-heavy, or just add on top?
The environmental conversation about AI is worth having. What it does not need is panic. I have personally been on the receiving end of self-righteous condemnation just for using AI at all, often from people who stream Netflix for two hours a day. Proper scrutiny is genuinely lacking, but the data we do have does not justify demonising the technology.
Additional sources for comparison tables: Netflix streaming emissions from Greenly (2025), approximately 55g CO₂e per hour. https://greenly.earth/en-us/leaf-media/data-stories/the-carbon-cost-of-streaming
v3, 1 April 2026: Corrected water usage figures (500ml per 10-50 responses, not per query), corrected GPT-3 training emissions (552 tCO₂e), removed unsupported energy substitution argument, improved sourcing throughout. Original published 29 March 2026.
Footnotes
1. Strubell, E., Ganesh, A., & McCallum, A. (2019). “Energy and Policy Considerations for Deep Learning in NLP.” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. https://arxiv.org/abs/1906.02243
2. Li, P., Yang, J., Islam, M.A., & Ren, S. (2023). “Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models.” The paper estimates GPT-3 consumes a 500ml bottle of water for roughly 10 to 50 responses, depending on deployment location and timing. https://arxiv.org/abs/2304.03271
3. Altman, S. (2025). “The Gentle Singularity.” Personal blog, June 10, 2025. Reported by Data Center Dynamics. Note: IEEE Spectrum observes this figure was stated without supporting methodology and that complex queries on larger models may consume significantly more. https://www.datacenterdynamics.com/en/news/sam-altman-chatgpt-queries-consume-034-watt-hours-of-electricity-and-0000085-gallons-of-water/
4. Stanford Human-Centered Artificial Intelligence (2025). “AI Index 2025: State of AI in 10 Charts.” The cost of querying a GPT-3.5-equivalent model dropped from $20 per million tokens (Nov 2022) to $0.07 per million tokens (Oct 2024). https://hai.stanford.edu/news/ai-index-2025-state-of-ai-in-10-charts
5. Patterson, D., Gonzalez, J., Le, Q., et al. (2021). “Carbon Emissions and Large Neural Network Training.” Reports 1,287 MWh and 552 tCO₂e for GPT-3 training. https://arxiv.org/abs/2104.10350
6. DemandSage (2025). ChatGPT processes over 1 billion queries per day as of early 2025, rising to 2.5 billion by mid-2025. https://www.demandsage.com/chatgpt-statistics/
7. Luccioni, A.S., Viguier, S., & Ligozat, A.L. (2023). “Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model.” Journal of Machine Learning Research. Also: Patterson, D. et al. (2021), arXiv:2104.10350. Multiple analyses confirm inference dominates lifetime emissions for widely deployed models.
8. DeepSeek V3 technical report (December 2024). GPT-4 training cost from Sam Altman’s public statement that it exceeded $100 million. https://www.analyticsvidhya.com/blog/2024/12/deepseek-v3/
9. Meta Llama 3.1 model card, Hugging Face. Meta reports matching 100% of electricity use with renewable energy since 2020, with market-based training emissions of 0 tCO₂eq. https://huggingface.co/meta-llama/Llama-3.1-8B/discussions/34