Is AI Bad for the Planet? We Did the Maths, and the Answer Might Surprise You
Generating a single response with an AI model uses as much energy as charging a smartphone for about three minutes, or roughly the same as running an LED light bulb for 20 minutes.
So is AI an environmental villain, or is the outrage overcooked?
The answer is: it depends entirely on what you use it for. Fortunately, recent research now lets us quantify these impacts.
A 2025 study by Jegham et al. benchmarked the environmental footprint of 30 state-of-the-art AI models running in commercial data centres.
The graphs below show energy use, water consumption, and carbon emissions for a mid-sized query (about 1,000 tokens in and out; tokens are small chunks of text, such as words or parts of words, that AI models use to process language).



The differences between models are significant.
Efficient, smaller models, such as GPT-4 mini, Llama 3.2 1B and 11B, use about 10–100 times less energy, water, and carbon per request than larger, more resource-intensive models.
At the higher end are models like DeepSeek R1, GPT o3, and GPT-4.5, which use more resources due to their increased complexity, reasoning ability, and size.
This is not a small difference: the least efficient models can use up to 65 times more energy on long prompts.
To put this into everyday terms, take GPT-4.5 as an example — one of the more resource-intensive models currently available. For a single request, its environmental impact is roughly equivalent to about 10 minutes of continuous laptop use.
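These equivalences can be sanity-checked with some back-of-envelope arithmetic. The device wattages below are illustrative assumptions (a ~20 W smartphone charger at the wall, a small ~3 W LED bulb, a ~30 W laptop under light load), not figures from the study:

```python
# Back-of-envelope energy equivalences for AI queries.
# All device wattages are illustrative assumptions, not measured values.
PHONE_CHARGER_W = 20   # smartphone charging at the wall (assumed)
LED_BULB_W = 3         # small LED bulb (assumed)
LAPTOP_W = 30          # laptop under light load (assumed)

def watt_hours(watts: float, minutes: float) -> float:
    """Energy used by a device of the given power over the given time."""
    return watts * minutes / 60

# A typical mid-sized query: roughly 3 minutes of phone charging.
typical_query_wh = watt_hours(PHONE_CHARGER_W, 3)

# The same energy, expressed as minutes of LED light.
led_minutes = typical_query_wh / LED_BULB_W * 60

# A resource-intensive GPT-4.5 query: roughly 10 minutes of laptop use.
heavy_query_wh = watt_hours(LAPTOP_W, 10)

print(f"Typical query: {typical_query_wh:.1f} Wh ≈ {led_minutes:.0f} min of LED light")
print(f"GPT-4.5 query: {heavy_query_wh:.1f} Wh")
```

Under these assumed wattages, a typical query works out to about 1 Wh and a GPT-4.5 query to several times that, consistent with the order-of-magnitude comparisons above.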
The graph below combines energy, water, and carbon into a single “Environmental Score” so the various impacts can be compared more easily across activities.

These figures cover inference, not training, and that focus makes intuitive sense: a model is trained once (or only occasionally), but it is used millions of times, so the one-off training cost is spread across every query. In practice, this means the environmental impact of individual queries (inference) is more relevant for everyday decision-making.
Are there any positive impacts?
It all depends on how AI is used - and that's where the real conversation begins. AI is quickly becoming highly capable, even surpassing human performance in some areas like pattern recognition and scientific reasoning.
If used for harmful purposes - surveillance, autonomous weapons, or large-scale deception - its effects could be deeply damaging.
But used for drug discovery, climate modelling, materials science, or optimising energy grids, the benefits could far outweigh the costs, potentially unlocking what some describe as an ‘age of abundance’.
The emissions associated with a well-targeted AI application are not just a cost; they may be an investment with enormous returns.
So what does this mean?
Like driving a car, running a computer, or printing a document, using AI carries an environmental cost.
What sets this moment apart is that we can actually quantify that cost - per query, per model, per use case. That transparency is itself valuable: it means the environmental equation is no longer invisible, and the choices we make about how and when to use AI can be informed ones.
Here’s where it gets really interesting.
Add a single 5-kilometre car trip to the graph above, and it dwarfs everything else on the chart - hundreds of AI queries, wiped out by one short drive.
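A rough calculation shows why the car trip dominates. The figures here are illustrative assumptions rather than values from the study: an average petrol car emitting about 170 g of CO2 per kilometre, and about 2 g of CO2 per mid-sized AI query (an assumed order of magnitude for a resource-intensive model):

```python
# Back-of-envelope: CO2 from a short car trip vs. AI queries.
# Both emission figures are illustrative assumptions.
CAR_G_CO2_PER_KM = 170    # average petrol car (assumed)
QUERY_G_CO2 = 2.0         # per mid-sized AI query (assumed order of magnitude)

trip_g = 5 * CAR_G_CO2_PER_KM            # a single 5 km drive
equivalent_queries = trip_g / QUERY_G_CO2  # queries with the same footprint

print(f"A 5 km drive ≈ {trip_g} g CO2 ≈ {equivalent_queries:.0f} AI queries")
```

Under these assumptions, one short drive carries the footprint of several hundred queries, which is why it towers over everything else on the chart.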
That’s not a reason to feel smug about using AI. It’s a reminder to ask a more important question: what are we actually using AI for?
If it helps someone carpool, plan a more efficient route, or even decide not to take a trip at all, then it may quickly offset its own environmental impact many times over.
The graph below makes this vivid. The logical conclusion? In the right hands, not using AI might actually be the worst environmental choice.

The tools we use have always had environmental costs - from the printing press to the personal computer. What matters is what we do with them. AI is no different, except that this time we can actually see the cost per query, compare models, and make informed decisions.
That’s not a burden. That’s an opportunity. Use it wisely.
Footnote
The term “Artificial Intelligence” originally came from early efforts to replicate human-like reasoning in computers, with the understanding that it was never truly the same as human intelligence. But that view now feels increasingly outdated.
Whether intelligence is “real” has little to do with the substrate it runs on, and everything to do with what it can accomplish. In many domains, current AI systems outperform the majority of humans; in others, they still fall short.
In our view, that puts it well past the threshold of ‘artificial’. Perhaps a more fitting term is Digital Intelligence - a genuinely new form of cognition, neither human nor imitation, but something distinct in its own right.
References:
Laptop Energy
IEA (2023). Data Centres and Data Transmission Networks. https://www.iea.org/reports/data-centres-and-data-transmission-networks
Laptop Water
IEA (2020). Water Energy Nexus. https://www.iea.org/reports/water-energy-nexus
Laptop Carbon
Ministry for the Environment (2023). Measuring emissions: A guide for organisations. https://environment.govt.nz/publications/measuring-emissions-a-guide-for-organisations-2023/
Footprinting Inference
Jegham, N., Abdelatti, M., Elmoubarki, L., & Hendawi, A. (2025). How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference. arXiv preprint arXiv:2505.09598. https://arxiv.org/abs/2505.09598