(Bloomberg) — In an AI chip industry almost entirely dominated by Nvidia Corp., a Google chip first developed more than 10 years ago specifically for artificial intelligence tasks is finally gaining momentum outside its home company as a way to train and run complex AI models.

Anthropic PBC on Thursday unveiled a deal with Alphabet Inc.’s Google to supply the artificial intelligence startup with more than a gigawatt of additional computing power, valued in the tens of billions of dollars. The agreement gives Anthropic access to as many as 1 million of Google’s tensor processing units, or TPUs — the company’s chips that are custom designed to accelerate machine learning workloads — and expands its use of the internet giant’s cloud services.

As AI industry contenders scramble to keep up with runaway demand, they've been looking for ways to boost their computing power that don't hinge on access to Nvidia's accelerator chips — both to temper dependence on the chip giant's expensive products and to mitigate the impact of shortages. While Anthropic is already a TPU customer, the dramatically increased deployment is one of the strongest endorsements yet of Google's technology, and represents a win for its cloud business, which has long lagged behind Amazon.com Inc. and Microsoft Corp.

A surge of interest in TPUs is likely to direct the attention of other AI startups and new customers toward Google’s cloud, helping the company leverage its years of investment in the chip.

Google’s cloud business reported operating income of $2.8 billion in the second quarter, more than double the amount from the same quarter last year. Shares of Alphabet rose slightly in premarket trading on Friday.

Google’s deal with Anthropic is a “really powerful validation of TPUs,” which could get more companies to try them, said Seaport analyst Jay Goldberg. “A lot of people were already thinking about it, and a lot more people are probably thinking about it now.”

Graphics processing units, or GPUs, the part of the chip market dominated by Nvidia, were created to speed the rendering of graphics — mainly in video games and other visual-effects applications — but turned out to be well-suited to training AI models because they can handle large amounts of data and computations. TPUs, on the other hand, are application-specific integrated circuits, or ASICs: microchips designed for a single, dedicated purpose.

Google began working on its first TPU in 2013 and released it two years later. Initially, it was used to speed up the company's web search engine and boost efficiency. Google began putting TPUs in its cloud platform in 2018, allowing customers to sign up for computing services running on the same technology that had boosted the search engine.

It was also adapted as an accelerator for AI and machine learning tasks in Google’s own applications. Because Google and its DeepMind unit develop cutting-edge AI models like Gemini, the company has been able to take lessons from the AI teams back to the chip designers, while the ability to customize the chips has benefited the AI teams.

“When we built our first TPU-based system a little bit over 10 years ago now, it was really about solving some internal scaling challenges we had,” said Mark Lohmeyer, Google Cloud vice president and general manager of AI and computing infrastructure, in a conference speech in September. “Then when we put that compute power into the hands of our researchers in Google DeepMind and others, that in many ways directly enabled the invention of the transformer,” he said, referring to the pioneering Google-proposed AI architecture that has become the foundation for today’s models.

Nvidia's chips have become the gold standard in the AI market because the company has been making GPUs for far longer than anyone else, and because its chips are powerful, frequently updated, backed by a full suite of related software and general-purpose enough to work for a wide array of tasks. Yet, owing to skyrocketing demand, they are also pricey and, for the past few years, chronically in short supply.

TPUs, meanwhile, can often perform better for AI workloads because they are custom designed for that purpose, said Seaport's Goldberg, who has a rare sell rating on Nvidia shares. That means the company can "strip out a lot of other parts of the chip" that aren't tailored to AI, he said. Now on the seventh generation of the product, Google has improved the chips' performance, made them more powerful and lowered the energy required to run them, which makes them less expensive to operate.

Current TPU customers include Safe Superintelligence, the startup founded last year by OpenAI co-founder Ilya Sutskever, as well as Salesforce Inc. and Midjourney, alongside Anthropic.

For now, businesses that want to use Google TPUs have to sign up to rent computing power in Google’s cloud. But that may soon change — the Anthropic deal makes an expansion into other clouds more likely, said Bloomberg Intelligence analysts.

“Google’s potential deal with Anthropic suggests more commercialization of the former’s tensor processing units beyond Google Cloud to other neo-clouds,” BI’s Mandeep Singh and Robert Biggar wrote in a note Wednesday, referring to smaller companies offering computing power for AI.

To be sure, no one — including Google — is currently looking to replace Nvidia GPUs entirely; the pace of AI development means that isn’t possible right now. Google is still one of Nvidia’s biggest customers despite having its own chips because it has to maintain flexibility for customers, said Gaurav Gupta, an analyst at Gartner. If a customer’s algorithm or model changes, GPUs are better suited to handle a wider range of workloads.

KeyBanc analyst Justin Patterson agrees, saying tensor processing units are "less versatile" than the more general-purpose GPUs. But the Anthropic deal demonstrates both that Google Cloud is gaining share and that TPUs are "strategically important," Patterson wrote in a note to clients.

The latest version of Google’s TPU, called Ironwood, was unveiled in April. It’s liquid-cooled and designed for running AI inference workloads — meaning using the AI models rather than training them. It’s available in two configurations — a pod of 256 chips or an even larger one with 9,216 chips.

Veterans of Google's TPU work are now leading chip startups or key projects at other large AI companies. Inference-chip startup Groq is helmed by Jonathan Ross, who began some of the work that became the TPU. Other TPU alumni include Richard Ho, vice president of hardware at ChatGPT developer OpenAI, and Safeen Huda, who joined OpenAI to work on hardware-software codesign, according to his LinkedIn profile.

By helping TPUs proliferate as AI workhorses, these former Googlers continue to spread the internet company’s influence across the AI industry. Those at Google tout the years of work as a key driver of the success of their product.

“There really is no substitute for this level of experience,” Google’s Lohmeyer said in September.

©2025 Bloomberg L.P.