{"id":411863,"date":"2026-04-22T14:23:08","date_gmt":"2026-04-22T14:23:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/411863\/"},"modified":"2026-04-22T14:23:08","modified_gmt":"2026-04-22T14:23:08","slug":"google-launches-training-and-inference-tpus-in-latest-shot-at-nvidia","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/411863\/","title":{"rendered":"Google launches training and inference TPUs in latest shot at Nvidia"},"content":{"rendered":"<p><img decoding=\"async\" class=\"InlineVideo-videoThumbnail\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/108295301-Google_unveils_new_AI_chips_Heres_what_to_know.jpg\" alt=\"Google unveils new TPUs to take on Nvidia in AI chips\"\/><\/p>\n<p>After years of producing chips that can both train artificial intelligence models and handle inference work, <a href=\"https:\/\/www.cnbc.com\/quotes\/GOOGL\/\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a> is separating those tasks into distinct processors, its latest effort to take on <a href=\"https:\/\/www.cnbc.com\/quotes\/NVDA\/\" rel=\"nofollow noopener\" target=\"_blank\">Nvidia<\/a> in AI hardware.<\/p>\n<p>Google said Wednesday that it&#8217;s making the change for the eighth generation of its tensor processing unit, or TPU. Both chips will become available later this year. <\/p>\n<p>&#8220;With the rise of AI agents, we determined the community would benefit from chips individually specialized to the needs of training and serving,&#8221; Amin Vahdat, a Google senior vice president and chief technologist for AI and infrastructure, said in a blog post. 
<\/p>\n<p>In March, Nvidia talked up forthcoming silicon that can enable models to rapidly respond to users&#8217; questions, thanks to technology obtained in its <a href=\"https:\/\/www.cnbc.com\/2025\/12\/24\/nvidia-buying-ai-chip-startup-groq-for-about-20-billion-biggest-deal.html\" rel=\"nofollow noopener\" target=\"_blank\">$20 billion deal<\/a> with chip startup Groq. Google is a large Nvidia customer, but offers TPUs as an alternative for companies that use its cloud services. <\/p>\n<p>Most of the world&#8217;s top technology companies are pursuing custom semiconductor development for artificial intelligence to maximize efficiency and so they can build for specialized use cases. <a href=\"https:\/\/www.cnbc.com\/quotes\/AAPL\/\" rel=\"nofollow noopener\" target=\"_blank\">Apple<\/a> has included neural engine AI components in its in-house iPhone chips for years. <a href=\"https:\/\/www.cnbc.com\/quotes\/MSFT\/\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft<\/a> announced a <a href=\"https:\/\/www.cnbc.com\/2026\/01\/26\/microsoft-reveals-maia-200-ai-chip-will-use-it-in-house.html\" rel=\"nofollow noopener\" target=\"_blank\">second-generation AI chip<\/a> in January. Last week, <a href=\"https:\/\/www.cnbc.com\/quotes\/META\/\" rel=\"nofollow noopener\" target=\"_blank\">Meta<\/a> said it&#8217;s working with <a href=\"https:\/\/www.cnbc.com\/quotes\/AVGO\/\" rel=\"nofollow noopener\" target=\"_blank\">Broadcom<\/a> to develop <a href=\"https:\/\/www.cnbc.com\/2026\/04\/14\/meta-commits-to-one-gigawatt-of-custom-chips-with-broadcom-as-hock-tan-agrees-to-leave-board.html\" rel=\"nofollow noopener\" target=\"_blank\">multiple versions<\/a> of AI processors.<\/p>\n<p>Google was early to the trend. In 2015, the company started using processors it had designed for running AI models, and began renting them to cloud clients in 2018. 
<a href=\"https:\/\/www.cnbc.com\/quotes\/AMZN\/\" rel=\"nofollow noopener\" target=\"_blank\">Amazon<\/a> Web Services announced the Inferentia chip for handling AI requests <a href=\"https:\/\/www.cnbc.com\/2018\/11\/28\/aws-launches-inferentia-ai-chip.html\" rel=\"nofollow noopener\" target=\"_blank\">in 2018<\/a>, and unveiled the <a href=\"https:\/\/www.cnbc.com\/2023\/11\/28\/amazon-reveals-trainium2-ai-chip-while-deepening-nvidia-relationship.html\" rel=\"nofollow noopener\" target=\"_blank\">Trainium processor<\/a> for training AI models in 2020. <\/p>\n<p>DA Davidson analysts estimated in September that the TPU business, coupled with the Google DeepMind AI group, would be worth about <a href=\"https:\/\/www.cnbc.com\/2025\/11\/07\/googles-decade-long-bet-on-tpus-companys-secret-weapon-in-ai-race.html\" rel=\"nofollow noopener\" target=\"_blank\">$900 billion<\/a>.<\/p>\n<p>None of the tech giants are displacing Nvidia, and Google isn&#8217;t even comparing the performance of its new chips with those from the AI chip leader. Google did say the training chip enables 2.8 times the performance of the seventh-generation Ironwood TPU, <a href=\"https:\/\/www.cnbc.com\/2025\/11\/06\/google-unveils-ironwood-seventh-generation-tpu-competing-with-nvidia.html\" rel=\"nofollow noopener\" target=\"_blank\">announced in November<\/a>, for the same price, while performance is 80% better for the inference processor.<\/p>\n<p>Nvidia said its upcoming <a href=\"https:\/\/www.cnbc.com\/2026\/03\/16\/nvidia-gtc-2026-ceo-jensen-huang-keynote-blackwell-vera-rubin.html\" rel=\"nofollow noopener\" target=\"_blank\">Groq 3 LPU<\/a> hardware will draw on large quantities of static random-access memory, or SRAM, which is used by Cerebras, an AI chipmaker that <a href=\"https:\/\/www.cnbc.com\/2026\/04\/17\/cerebras-new-ipo-ai-chips.html\" rel=\"nofollow noopener\" target=\"_blank\">filed<\/a> to go public earlier this month. 
Google&#8217;s new inference chip, dubbed TPU 8i, also relies on SRAM. Each chip contains 384 megabytes of SRAM, triple the amount in Ironwood.<\/p>\n<p>The architecture is designed &#8220;to deliver the massive throughput and low latency needed to concurrently run millions of agents cost-effectively,&#8221; <a href=\"https:\/\/www.cnbc.com\/2014\/10\/06\/sundar-pichai.html\" rel=\"nofollow noopener\" target=\"_blank\">Sundar Pichai<\/a>, CEO of Google parent Alphabet, wrote in a blog post.<\/p>\n<p>Adoption of Google&#8217;s AI chips is ramping up. Citadel Securities built quantitative research software that draws on Google&#8217;s TPUs, and all 17 U.S. Energy Department national laboratories use AI co-scientist software built on the chips, Google said. Anthropic has committed to using <a href=\"https:\/\/www.cnbc.com\/2026\/04\/06\/broadcom-agrees-to-expanded-chip-deals-with-google-anthropic.html\" rel=\"nofollow noopener\" target=\"_blank\">multiple gigawatts<\/a> worth of Google TPUs.<\/p>\n<p>WATCH: <a href=\"https:\/\/www.cnbc.com\/video\/2026\/04\/07\/googles-nvidia-chip-alternative.html\" rel=\"nofollow noopener\" target=\"_blank\">Broadcom agrees to expanded chip deal with Google, Anthropic<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"After years of producing chips that can both train artificial intelligence models and handle inference work, Google 
is&hellip;\n","protected":false},"author":2,"featured_media":411864,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,3490,1671,1646,218,219,726,1676,72,122,38273,61,206,60,138658,1677,1672,1673,144052,80],"class_list":{"0":"post-411863","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-alphabet-class-a","10":"tag-amazon-com-inc","11":"tag-apple-inc","12":"tag-artificial-intelligence","13":"tag-artificialintelligence","14":"tag-breaking-news-technology","15":"tag-broadcom-inc","16":"tag-business","17":"tag-business-news","18":"tag-enterprise","19":"tag-ie","20":"tag-internet","21":"tag-ireland","22":"tag-ishares-semiconductor-etf","23":"tag-meta-platforms-inc","24":"tag-microsoft-corp","25":"tag-nvidia-corp","26":"tag-spdr-sp-semiconductors","27":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/411863","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=411863"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/411863\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/411864"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=411863"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=411863"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.
com\/ie\/wp-json\/wp\/v2\/tags?post=411863"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}