{"id":191169,"date":"2025-10-10T19:33:10","date_gmt":"2025-10-10T19:33:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/191169\/"},"modified":"2025-10-10T19:33:10","modified_gmt":"2025-10-10T19:33:10","slug":"google-boasts-1-3-quadrillion-tokens-each-month-but-the-figure-is-mostly-window-dressing","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/191169\/","title":{"rendered":"Google boasts 1.3 quadrillion tokens each month, but the figure is mostly window dressing"},"content":{"rendered":"<p>Google says it now processes more than 1.3 quadrillion tokens every month with its AI models. But this headline number mostly reflects computing effort, not real usage or practical value, and it raises questions about Google&#8217;s own environmental claims.<\/p>\n<p>According to Google, it processes over 1.3 quadrillion tokens per month across its AI products and interfaces. The milestone was <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/x.com\/OfficialLoganK\/status\/1976359039581012127\" data-type=\"editable-link\">announced by Google CEO Sundar Pichai at a Google Cloud event<\/a>. <a href=\"https:\/\/the-decoder.com\/google-processed-nearly-one-quadrillion-tokens-in-june-doubling-mays-total\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">Back in June<\/a>, Google said it had reached 980 trillion tokens, more than double May&#8217;s total. 
The latest jump adds about 320 trillion tokens since June, but growth has already slowed, a trend not reflected in Pichai&#8217;s presentation.<\/p>\n<p>Token consumption is growing faster than actual usage<\/p>\n<p>Tokens are the smallest units processed by large language models, roughly word fragments or syllables. A huge token count sounds like surging usage, but it is primarily a measure of rising computational complexity.<\/p>\n<p>The main driver is likely Google&#8217;s rollout of reasoning models like <a href=\"https:\/\/the-decoder.com\/google-updates-gemini-2-5-flash-models-to-deliver-faster-responses-and-improved-performance\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">Gemini 2.5 Flash<\/a>. These models perform far more internal calculations for every request. Even a <a href=\"https:\/\/the-decoder.com\/microsofts-phi-4-responds-to-a-simple-hi-with-56-thoughts\/\" rel=\"nofollow noopener\" target=\"_blank\">simple greeting like &#8220;Hi&#8221; can trigger dozens of processing steps before a response appears in today&#8217;s reasoning models<\/a>.<\/p>\n<p>A <a href=\"https:\/\/the-decoder.com\/gemini-flash-2-5-becomes-150-times-more-expensive-for-reasoning-tasks-than-flash-2-0\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">recent analysis showed that Gemini 2.5 Flash uses about 17 times more tokens per request<\/a> than its predecessor and is up to 150 times more expensive for reasoning tasks. 
Moreover, complex features like video, image, and audio processing are likely factored into the total, but Google doesn&#8217;t break those out.<\/p>\n<p>So the token number is mostly a measure of backend computing load and infrastructure scaling, not a direct indicator of user activity or actual benefit.<\/p>\n<p>Google&#8217;s token consumption vs. Google&#8217;s environmental claims<\/p>\n<p>Google&#8217;s new token stats also reinforce a key criticism of <a href=\"https:\/\/the-decoder.com\/google-downplays-ais-environmental-impact-in-new-study\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">Google&#8217;s own environmental report<\/a>: by measuring only the smallest unit of computation, the study ignores the true scale of AI operations and downplays the real environmental impact. The report claims a typical Gemini text prompt uses only 0.24 watt-hours of electricity and 0.26 milliliters of water, and emits just 0.03 grams of CO\u2082, supposedly less than nine seconds of TV viewing.<\/p>\n<p>Those figures assume a &#8220;typical&#8221; short text prompt in the Gemini app. Google doesn&#8217;t say whether they apply to lightweight language models (likely) or to the much more resource-hungry reasoning models (unlikely). 
The study also leaves out heavier use cases like document analysis, image or audio generation, multimodal prompts, or agent-driven web searches.<\/p>\n<p>Viewed in this light, Google&#8217;s 1.3 quadrillion tokens mainly highlight how rapidly its computing demands are accelerating. 
Yet this surge in system-wide usage doesn&#8217;t appear in Google&#8217;s official environmental assessment. It&#8217;s a bit like an automaker touting low fuel consumption while idling, then calling the entire fleet &#8220;green&#8221; without accounting for real-world driving or manufacturing.<\/p>\n","protected":false},"excerpt":{"rendered":"Summary Google says it now processes more than 1.3 quadrillion tokens every month with its AI models. But&hellip;\n","protected":false},"author":2,"featured_media":191170,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[4323,844,1940,86,56,54,55],"class_list":{"0":"post-191169","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-computing","9":"tag-google","10":"tag-google-ai","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/191169","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=191169"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/191169\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/191170"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=191169"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=191169"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.
newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=191169"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}