“Holy shit. I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane — reasoning, speed, images, video… everything is sharper and faster. It feels like the world just changed, again.”

That was the tweet posted Monday by Marc Benioff, founder and CEO of software giant Salesforce, to his more than one million followers on X (formerly Twitter). Benioff frequently praises the AI revolution on his account, even as its rise poses a potential threat to his own company.


Nvidia CEO Jensen Huang (left) and Google CEO Sundar Pichai
(Photo: Damian Lemanski/Bloomberg, Ezra Acayan/Getty Images)
Benioff’s tweet was just one of many praising Google’s latest release of Gemini. The tech giant’s stock jumped 6% on Monday, and the rally continued into Tuesday, bringing Google’s market cap up 70% from where it stood at the beginning of 2025. The surge nearly pushed Google into the $4 trillion club, territory previously reached only by Nvidia, Microsoft, and Apple; Nvidia, by contrast, dropped 5% at the start of trading. And it looks like Google is there to stay.
But the surge goes deeper than just a successful chatbot launch. Unlike its competitors, Google trained and now runs Gemini on its own chips, not Nvidia’s. Google’s chip is called a TPU (Tensor Processing Unit), which operates differently from Nvidia’s market-dominant GPU (Graphics Processing Unit).
Until recently, Gemini had its share of missteps, from overly politically correct responses to bizarre image generations, like depicting a Black woman as a historical pope or Nazi soldiers of various ethnicities. Google eventually suspended Gemini’s image generation of people, and CEO Sundar Pichai publicly apologized.
Now, users are reporting that Gemini is significantly more accurate and nuanced than OpenAI’s ChatGPT, especially at handling complex tasks.
Still, the real story isn’t Google. What sent Wall Street soaring Monday, and has dominated tech’s AI-bubble chatter since, is the prospect of a crack in Nvidia’s near-monopoly.
Nvidia isn’t going anywhere; it remains the world’s most valuable company, or at worst, one of them. But Google’s surprisingly strong results suggest there may finally be a path around Nvidia, or as it’s called in the chip industry, a way to reduce the “Nvidia tax.”
That “tax” refers to the high cost and tight supply of Nvidia chips, which nearly every company that wants to develop or use AI must contend with; in other words, nearly every company in the world.
Last week, Nvidia CEO Jensen Huang insisted that chip allocation is based solely on operational readiness, not favoritism. However, reports have surfaced that tech leaders like Oracle’s Larry Ellison and Elon Musk have “asked” for priority access. According to Huang, Nvidia decides who gets GPUs first based on immediate usability, not stockpiling potential.
The expectation, and perhaps the mild sigh of relief from Wall Street investors who have been swinging nervously between optimism and pessimism about AI’s future, stems largely from the notion that even a hint of competition for Nvidia could ease the massive commitments tech giants like Meta, Amazon and Microsoft are making to keep up with the AI revolution.
Adding fuel to that hope was a U.S. report this week that Mark Zuckerberg’s Meta is considering switching to Google’s processors in its data centers starting in 2027. That development made the dream of an alternative supply chain to Nvidia feel more tangible, and sent anyone who had just mastered what GPU stands for scrambling to ChatGPT or Gemini to ask, “What the hell is a TPU?”
Google unveiled the first generation of its TPU back in 2016, initially for internal use in its own infrastructure, accelerating machine-learning workloads behind services such as, naturally, its search engine.
As AI technologies advanced, Google continued investing in the chip, recognizing its natural evolution toward training large language models (LLMs). That evolution is now yielding advantages over Nvidia’s GPUs, which were originally developed to render high-end graphics for video games and only later repurposed for AI’s heavy numerical workloads.
While GPUs turned out to be well-suited for AI and thus widely adopted, TPUs were built specifically for AI tasks.
As a result, Google’s processor is more efficient, delivering higher throughput (measured in operations per second) while consuming less energy.
In addition to the current shortage of Nvidia chips, another major bottleneck is energy infrastructure. The demand is so high that the U.S. has begun building nuclear power plants specifically to support the data centers powering AI applications. In that context, the energy efficiency of Google’s TPU becomes even more critical, especially as these chips will increasingly be used not just to train AI models, but to power inference models too.
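The efficiency argument above boils down to a simple ratio: useful operations delivered per watt consumed. A minimal sketch of how such performance-per-watt comparisons work, using purely hypothetical figures (these are illustrative placeholders, not published specs for any real TPU or GPU):

```python
# Hypothetical accelerator specs: throughput in tera-operations per second
# and power draw in watts. Placeholder numbers for illustration only.
chips = {
    "chip_a": {"tops": 900, "watts": 450},
    "chip_b": {"tops": 700, "watts": 250},
}

def efficiency(spec):
    """Tera-operations per second per watt: higher means more work per joule."""
    return spec["tops"] / spec["watts"]

for name, spec in chips.items():
    print(f"{name}: {efficiency(spec):.2f} TOPS/W")
```

In this toy example, chip_b has lower peak throughput (700 vs. 900 TOPS) but delivers 2.8 TOPS/W against chip_a’s 2.0, so it does more work per unit of energy. That is the kind of trade-off that matters when the binding constraint is the power grid rather than the chip supply.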
Fears that Nvidia could lose its unchallenged dominance in AI chips sent its stock down 5% Tuesday before Wall Street even opened, with further declines during trading.
The dip pushed Nvidia’s market cap down to $4.1 trillion, no longer far ahead of Google. After Nvidia’s fall and Google’s rise, the companies’ forward price-to-earnings ratios have also drawn closer, making Nvidia appear significantly cheaper than it was just a week ago, before it released its blockbuster earnings report.
Despite the buzz, Google’s own internal demand remains high, and it’s unclear whether the company will rush to make its chips widely available. The competitive edge TPUs provide by reducing Google’s reliance on Nvidia could be just as crucial in the cloud-computing battle, where Google still lags behind Amazon and Microsoft.
There’s also a domino effect playing out among the less flashy companies operating behind the scenes in chip design and manufacturing. While attention focused on Google’s stock surge and Nvidia’s decline, other stocks quietly made interesting moves. Most notably, Broadcom jumped 11% on Monday, pushing its valuation close to $2 trillion.
Broadcom is Google’s chip-design partner, responsible for physically engineering the TPU. If Meta does move forward with plans to adopt TPU chips, it would also need to pay Broadcom. The U.S.-based chipmaker also designs custom chips for other companies but maintains a high level of confidentiality, as demanded by its clients.
Another name now entering the spotlight is Marvell, a much smaller player that also competes in the custom-chip arena occupied by designs like Google’s TPU. Like Mellanox, which was acquired by Nvidia and is now a key part of its AI solutions, Marvell provides high-speed optical connectivity solutions for data centers.
Finally, Google’s deeper move into chip development could benefit the manufacturers themselves, which are indifferent to whether they produce chips for Google or Nvidia; for them, it’s simply more business. These manufacturers are primarily based in East Asia, led by TSMC, Samsung, and Micron.