
AI Efficiency: A Gemini Reality Check with Greenly

In this data story, we respond to Google's most recent report on the environmental impact of AI and its Gemini model.

Methodology Disclaimer

Important: The information provided in this article, including calculations, comparisons, and estimations, is based on Greenly’s independent analysis of Google’s Gemini report, Mistral’s LCA, publicly available studies, and our own internal modelling tools. These figures are intended to contribute to discussions around the environmental impact of AI models, not to provide definitive or exhaustive measurements.

The numbers rely on multiple assumptions. Real-world impacts vary significantly depending on the complexity of prompts, the efficiency of data centers, hardware lifespans, and whether emissions are reported using market-based or location-based methodologies.

As the AI industry evolves rapidly, these figures should be seen as indicative rather than absolute. New efficiency gains, hardware advances, changes in data center operations, and the release of further transparency reports may shift results considerably. While the exact numbers will continue to change, this analysis provides a useful approximate snapshot of the scale of AI’s environmental footprint today and highlights the urgent need for clearer, standardized, and independently verified methodologies.


With the exponential growth of AI in the last few years and our almost daily reliance on it, the environmental impact of this technology has become a hot topic, with concerns over both its carbon footprint and water usage. 

Over the last few years, we’ve seen a mix of studies - both from independent third parties and from the tech giants themselves - that aim to quantify these impacts. Google is the most recent company to release a study on the impact of its language model. 

Their technical paper focuses on the energy, emissions, and water use associated with Gemini prompts. According to the study, a single median Gemini prompt consumes 0.24 Wh of energy, produces 0.03 gCO₂e (market-based), and uses 0.26 mL of water. On the surface, these figures suggest a significantly smaller environmental impact compared to other open language models - something Google attributes to years of efficiency improvements and cleaner energy sourcing. However, the comparison with other AI models is not quite so straightforward - something we will demonstrate in this article. 

At Greenly, we welcome the release of their technical paper as a step towards greater transparency within the industry, but also believe the discussion highlights a deeper need: clear, comparable, and externally verified methodologies to measure the true environmental impact of AI.

Google’s Gemini Report

Google’s technical report is undoubtedly a step in the right direction. The AI industry is relatively new and fast-evolving, which has meant that reporting on environmental impacts has been spotty and difficult to compare. Google’s latest report will hopefully encourage other market players to calculate their own impacts.

Google’s Methodology

Google’s study looks at the energy use, emissions, and water consumption linked to serving Gemini prompts. The methodology is relatively comprehensive and covers four main areas (a simplified sketch of how the energy categories combine into a per-prompt figure follows the list):

  • Comprehensive measurement boundary: Google measures energy use across four categories under its operational control:
    ◦ Active AI accelerator energy
    ◦ Active CPU and DRAM energy
    ◦ Idle machine energy
    ◦ Overhead energy (data center infrastructure)
  • Energy measurement methodology: Energy consumption is measured at the level of serving Gemini prompts and reflects the material energy sources under Google’s control within its data centers.
  • Emissions and water measurement methodologies: Google calculates emissions using market-based emission factors for electricity, reflecting its renewable energy purchases, and reports water usage associated with data center cooling.
  • Choice of an aggregate representative metric: The results are reported based on a “median Gemini Apps text prompt”, intended to represent a typical Gemini query.
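To make this measurement boundary concrete, here is a minimal sketch of how such per-prompt energy accounting could work. Every input value below is a hypothetical placeholder (chosen only so the result lands near the reported 0.24 Wh); only the structure - active accelerators, CPU/DRAM, idle machines, plus a data center overhead multiplier - mirrors the categories listed above.

```python
# Illustrative sketch of per-prompt energy accounting using Google's four
# reported categories. All input numbers are hypothetical placeholders.

def energy_per_prompt_wh(accelerator_wh, cpu_dram_wh, idle_wh,
                         overhead_multiplier, num_prompts):
    """Roll active, idle, and overhead energy into a per-prompt figure (Wh)."""
    it_energy_wh = accelerator_wh + cpu_dram_wh + idle_wh  # IT equipment energy
    total_wh = it_energy_wh * overhead_multiplier          # add data center overhead
    return total_wh / num_prompts

example = energy_per_prompt_wh(
    accelerator_wh=140e6,      # active AI accelerator energy (hypothetical)
    cpu_dram_wh=50e6,          # active CPU and DRAM energy (hypothetical)
    idle_wh=20e6,              # idle machine energy (hypothetical)
    overhead_multiplier=1.15,  # data center infrastructure overhead (hypothetical)
    num_prompts=1e9,           # prompts served in the same window (hypothetical)
)
print(f"{example:.2f} Wh per prompt")  # ≈ 0.24 Wh with these made-up inputs
```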

Limitations and Missing Data

While Google’s study is a significant step towards greater transparency, it’s important to note that a number of limitations make the results harder to interpret and compare with other AI models:

  • Market-based vs location-based emissions: Google reports emissions using a market-based approach (reflecting renewable energy purchases), resulting in a headline figure of 0.03 gCO₂e per median prompt. However, under a location-based approach (which reflects the actual grid mix where data centers operate), emissions would be closer to 0.09 gCO₂e per prompt - roughly three times higher (see the sketch at the end of this section).
  • Unspecified prompt assumptions: Google reports impacts for a “median Gemini Apps text prompt” but doesn’t specify the prompt’s length, complexity, or token count. This makes comparisons with other models difficult and could underestimate energy, emissions, and water use for more complex queries. Using the median also has mixed implications: it avoids skewing results with very large prompts but can hide the higher footprint of tasks like multi-step reasoning or image generation.
  • Efficiency claims: Google highlights significant efficiency gains, stating that Gemini’s per-prompt energy use dropped 33x and emissions 44x in just one year. While impressive, these claims should be treated with caution. Google hasn’t published its 2024 baseline data or explained its calculation methodology, making it impossible to independently verify these numbers.
  • Scope exclusions: The study excludes several significant factors, including external and data center networking (all networking before a prompt reaches Google’s AI infrastructure), energy used by end-user devices such as phones and laptops, LLM training impacts, and hardware manufacturing and end-of-life emissions. Without these, the figures don’t reflect Gemini’s full lifecycle footprint.
  • Data transparency gaps: The study doesn’t disclose key information such as total prompt volumes, cumulative water usage, or the share of Gemini’s emissions relative to Google’s overall footprint. Without this context, it’s difficult to understand the true scale of Gemini’s impact.
  • No independent review: Google’s study hasn’t been externally audited or peer-reviewed. While it references some older sources (dating back to 2011–2017), these are used for comparison rather than as inputs to the methodology. Without third-party validation, however, the results remain difficult to fully confirm.

These aspects make cross-model comparisons challenging, and they mean that claims that Gemini consumes significantly less water and energy, and produces a lower carbon footprint, are not as straightforward as they appear.
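To illustrate the first limitation above, here is a back-of-the-envelope sketch of how the same per-prompt energy figure yields different emissions under the two accounting approaches. The emission factors are back-calculated from the figures quoted in this article (0.03 and 0.09 gCO₂e for a 0.24 Wh prompt); they are assumptions for illustration, not values disclosed by Google.

```python
# Same energy, two emission factors: market-based (reflecting renewable energy
# purchases) vs location-based (reflecting the local grid mix).
energy_per_prompt_wh = 0.24

# Factors implied by the figures above (gCO2e per Wh) - back-calculated
# assumptions, not disclosed values: 0.03 / 0.24 = 0.125; 0.09 / 0.24 = 0.375.
emission_factors = {
    "market-based": 0.125,
    "location-based": 0.375,
}

for basis, factor in emission_factors.items():
    print(f"{basis}: {energy_per_prompt_wh * factor:.2f} gCO2e per prompt")
# market-based: 0.03 gCO2e per prompt
# location-based: 0.09 gCO2e per prompt
```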

On a More Positive Note…

Despite these limitations, there are a number of encouraging developments worth highlighting:

  • Hardware and software innovations: Google discloses improvements in hardware design, software optimization, and clean energy procurement that have helped lower impacts.
  • More comprehensive boundaries: The inclusion of CPUs, DRAM, idle capacity, and data center overheads is positive, as many studies overlook these areas. (By comparison, for example, Meta only measures GPU energy.)
  • Acknowledging limitations: To Google’s credit, the report openly recognizes many of its own exclusions and methodological constraints, which sets a useful precedent for greater transparency across the industry.
  • A push for transparency: While not perfect, the publication of per-prompt figures is an important step forward and could encourage other AI companies to release similar data.


How Does This Compare to Mistral’s Recently Published LCA?

In July 2025, Mistral AI released one of the first externally audited, peer-reviewed lifecycle analyses of an AI model, conducted with Carbone 4 and ADEME. Their results show markedly higher per-query emissions and water consumption than Google reports for Gemini, prompting some to compare the two.

However, the comparison is not straightforward, as the studies are based on very different types of queries: Google reports impacts for a ‘median text prompt’ of unspecified length, while Mistral measured the generation of a full page of text (around 400 tokens).

This already explains part of the gap between the two results, alongside the methodological differences:

Aspect | Google Gemini Study | Mistral AI LCA
Energy per prompt | 0.24 Wh (median, unspecified prompt length) | Not disclosed
Carbon per prompt | 0.03 gCO₂e (market-based) | 1.14 gCO₂e (400-token response, location-based)
Water per prompt | 0.26 mL | 45 mL
Scope | Inference only; excludes training | Includes training + inference
Methodology | Internal study, not peer-reviewed | Full LCA, ISO 14040/44 compliant
Audit status | Internal | Externally audited (Carbone 4, ADEME, Resilio, Hubblo)
Transparency | Limited prompt specification | Discloses token counts, assumptions, and sub-scope details

These differences in approach show how methodological choices make direct comparisons between AI models unreliable. The gap largely reflects different accounting choices and units, not necessarily an efficiency gulf. If Gemini were restated using location-based emissions, and if prompt length were aligned, its per-prompt figure would likely rise.

The takeaway: without harmonized prompt specs and clear system boundaries, direct comparisons will remain misleading.

What About ChatGPT? 

With over 700 million weekly active users, ChatGPT is the world’s most widely used AI application, so no discussion about the carbon footprint of AI would be complete without it. How does it compare to Gemini in terms of energy usage and related emissions?


According to Sam Altman, OpenAI’s CEO, the average ChatGPT query uses just 0.34 Wh - only slightly higher than the 0.24 Wh that Google reports for Gemini - which would mean that the resulting carbon footprint is similarly low. But does this claim hold up to scrutiny?

A number of independent studies suggest that the real figure is, in fact, much higher:

  • A 2024 study on BLOOM-7B (7 billion parameters - far smaller than GPT-4, which some estimate to be based on over 1.8 trillion parameters) found an average energy consumption of around 0.1 Wh per query.
  • Another analysis of BLOOM-176B (176 billion parameters - more similar in scale to GPT-3) estimated an average of 4 Wh per query. This is more than 11 times higher than the figure quoted by Sam Altman, even though BLOOM-176B has roughly 10 times fewer parameters than the estimated size of the updated GPT-4 model.

These studies suggest that the real per-query energy use is likely far higher, even after taking into consideration software optimizations and efficiency improvements. 

Based on the trends observed in the BLOOM studies referenced above, Greenly calculated the energy use of a single ChatGPT query (170 output tokens). Our estimate shows it consumes over 20 Wh of energy - around 83 times higher than Altman’s claim. 
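The gap between these figures comes down to assumptions. The sketch below is not Greenly’s internal methodology; it simply shows how widely a per-query estimate can swing depending on whether you assume energy scales with model size. The BLOOM-176B measurement (4 Wh per query), Altman’s 0.34 Wh claim, and the 1.8 trillion-parameter estimate for GPT-4 come from the sources cited in this article; everything else is an assumption.

```python
# How much does a per-query energy estimate depend on assumptions?
# Two crude bounds, anchored on the BLOOM-176B measurement cited above.
altman_claim_wh = 0.34         # per-query figure quoted by Sam Altman
bloom_176b_wh = 4.0            # measured average for BLOOM-176B
bloom_176b_params = 176e9
gpt4_params_estimate = 1.8e12  # widely circulated estimate, not confirmed by OpenAI

# Lower bound: assume GPT-4 uses no more energy per query than BLOOM-176B.
low_wh = bloom_176b_wh
# Upper bound: assume per-query energy scales linearly with parameter count.
high_wh = bloom_176b_wh * (gpt4_params_estimate / bloom_176b_params)

print(f"estimate range: {low_wh:.0f}-{high_wh:.0f} Wh per query")
print(f"vs Altman's claim: {low_wh / altman_claim_wh:.0f}x-{high_wh / altman_claim_wh:.0f}x")
# estimate range: 4-41 Wh per query
# vs Altman's claim: 12x-120x
```

Greenly’s own estimate of over 20 Wh (around 83 times Altman’s figure) sits within this range; the width of the range itself is the point - without disclosed assumptions, headline per-query numbers are hard to compare.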

This highlights once again the need for standardized methodology and independent verification - without which the true energy footprint of AI models will remain unclear.


Greenly’s Estimated Figures for Gemini

Using our internal Greenly calculator, we also estimated the energy use and emissions for a 400-token prompt - a scenario we chose to align with the use case assessed in Mistral’s LCA.

For a model like Gemini 2.5 Pro, we estimate:

  • Energy consumption: 14.6 Wh per prompt
  • Emissions: 7.4 gCO₂e per prompt

These figures are closer to Mistral’s reported impacts, which makes sense given that Gemini 2.5 Pro is estimated to use around four times more parameters than Mistral Large 2 (500 billion vs 123 billion).

Note: our estimate does not account for potential internal optimizations by Google, which could reduce Gemini’s actual footprint.
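As a rough consistency check on the figures above: dividing the estimated emissions by the estimated energy gives an implied emission factor of about 0.5 gCO₂e per Wh (roughly 500 gCO₂e/kWh), consistent with a location-based grid-average assumption rather than a market-based one. The snippet below simply restates that arithmetic; the factor is back-calculated from our own estimates, not a separately disclosed parameter.

```python
# Back-calculating the emission factor implied by the 400-token Gemini estimate.
energy_wh = 14.6    # estimated energy per 400-token prompt (Wh)
emissions_g = 7.4   # estimated emissions per 400-token prompt (gCO2e)

implied_factor = emissions_g / energy_wh  # gCO2e per Wh
print(f"implied factor: {implied_factor:.2f} gCO2e/Wh "
      f"(~{implied_factor * 1000:.0f} gCO2e/kWh)")
# implied factor: 0.51 gCO2e/Wh (~507 gCO2e/kWh)
```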


Why the Industry Needs Standardization 

AI has seen phenomenal growth in recent years, and this is not expected to slow down. Data centers are projected to expand by 28% by 2030, and AI’s energy demands are rapidly rising, with estimates suggesting it could account for 3 to 4% of global electricity consumption by the end of the decade. Carbon emissions linked to AI are also expected to double between 2022 and 2030, amplifying its environmental footprint. This is exactly why it's so important for the industry to accurately and effectively measure its carbon footprint.

At Greenly, we believe that for the AI sector to effectively work towards a more sustainable model, more transparency and cross-sector alignment are needed. Specifically: 

  • Common reporting frameworks: comparable methodologies for energy, emissions, and water footprints.
  • Independent verification: third-party auditing to build trust and credibility.
  • Granular disclosures: including prompt length, model size, location-based emissions, and embodied hardware impacts.
  • Lifecycle transparency: covering not just inference, but also training impacts and hardware manufacturing emissions.

Until then, reported figures will continue to vary widely, as Gemini’s 0.03 gCO₂e per prompt and Mistral’s 1.14 gCO₂e demonstrate. Google’s Gemini paper is a welcome development, but it also highlights why the industry needs collaboration and transparency to build a consistent, science-based standard for measuring AI’s environmental footprint.

As other AI giants (including Anthropic and OpenAI) publish their own studies, such a framework can be refined and strengthened, helping to create more reliable and comparable reporting across the sector.


Greenly Tips: How to Reduce Your AI Footprint

While the industry works towards better standardization and transparency, there are small steps individuals and businesses can take to reduce their day-to-day impact when using AI:

  • Choose smaller models when possible: If the task doesn’t require high complexity, opt for lighter versions (e.g., models labelled “mini” or “nano” in ChatGPT).
  • Write concise prompts and request short answers: Generating fewer tokens means lower energy use and emissions (see the sketch after this list).
  • Skip the polite extras: An unnecessary ‘hello’, ‘please’, or ‘thank you’ just adds extra processing and increases the AI’s workload.
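To put rough numbers on the “fewer tokens” advice, here is a minimal sketch using Mistral’s published figure (1.14 gCO₂e for a roughly 400-token response) and assuming, as a simplification, that emissions scale more or less linearly with output length.

```python
# Rough per-token footprint based on Mistral's LCA figure, assuming emissions
# scale roughly linearly with the number of output tokens (a simplification).
g_per_400_tokens = 1.14
g_per_token = g_per_400_tokens / 400  # ~0.003 gCO2e per output token

for tokens in (100, 200, 400):
    print(f"{tokens:>3} output tokens ≈ {tokens * g_per_token:.2f} gCO2e")
# 100 output tokens ≈ 0.28 gCO2e
# 200 output tokens ≈ 0.57 gCO2e
# 400 output tokens ≈ 1.14 gCO2e
```

On this simplified basis, halving the length of a response roughly halves its footprint - a useful rule of thumb even if real per-token impacts vary with model and prompt complexity.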


What About Greenly?

If reading this data story on the environmental impact of AI platforms such as Gemini has inspired you to consider your company’s own carbon footprint, Greenly can help.

At Greenly, we can help you assess your company’s carbon footprint, and then give you the tools you need to cut down on emissions. We offer a free demo for you to better understand our platform and all that it has to offer – including assistance on how to reduce emissions, optimize energy efficiency, and more to help you get started on your climate journey.

Learn more about Greenly’s carbon management platform here.

Sources

arXiv https://arxiv.org/abs/2508.15734

Mistral AI https://mistral.ai/news/our-contribution-to-a-global-environmental-standard-for-ai

CNBC https://www.cnbc.com/2025/08/04/openai-chatgpt-700-million-users.html

Sam Altman https://blog.samaltman.com/the-gentle-singularity

Carnegie Mellon University https://arxiv.org/pdf/2311.16863

Exploding Topics https://explodingtopics.com/blog/gpt-parameters

Estimating the Carbon Footprint of BLOOM https://arxiv.org/pdf/2211.02001

The UN Agency for Digital Technologies https://www.itu.int/hub/2022/09/how-to-reduce-the-carbon-footprint-of-advanced-ai-models/

Ohio Today https://www.ohio.edu/news/2024/11/ais-increasing-energy-appetite
