Broadcom will soon deploy multiple gigawatts' worth of custom accelerators at Meta, OpenAI, and Anthropic, a feat it says shows AI companies and hyperscalers won't be able to successfully develop and deploy their own silicon any time soon.
Speaking on the company’s Q1 2026 earnings call, CEO Hock Tan pointed to 106 percent year-over-year growth for AI-related silicon, which brought in $8.4 billion of revenue for the quarter.
“Our custom accelerator business is progressing very well across five customers,” the CEO said, adding that he expects Google to show “even stronger demand” for Broadcom silicon as it deploys a next-generation TPU. Anthropic will soon deploy one gigawatt of Broadcom-baked TPUs, and Tan said the AI company plans a three-gigawatt deployment in 2027. Meta will install “multiple gigawatts” of Broadcom’s XPU accelerators “in 2027 and beyond.”
The CEO said OpenAI will deploy “over one gigawatt of compute capacity” based on custom XPUs in 2027.
Tan said Broadcom has already secured the supplies it needs to make all that kit – including high-bandwidth memory – and satisfy demand until 2028. And he predicted that Broadcom will win similar deals for years to come, because hyperscalers and AI upstarts can’t match its ability to design and deliver custom silicon.
“They face tremendous challenges,” he said, when it comes to attracting silicon design talent capable of creating chips tuned to particular workloads, managing the production process, developing packaging expertise, and then networking their chips together.
Tan said homebrew chipmaking efforts must create chips that are competitive with not just NVIDIA, but “all the other LLM platform players that you are competing against.” He can’t see that happening at any hyperscaler or AI company “for many years to come.”
“Anybody can design a chip in a lab that works well,” he said. “Can you produce 100,000 of those chips quickly, at yields that you can afford? And we do not see too many players in the world that can do that.”
Broadcom’s AI-relevant networking business is also booming, with revenue up 60 percent year-over-year. Tan said the company will debut a seventh-generation Tomahawk switching chip next year that doubles the current model’s performance, and will do the same for its direct copper interconnects – meaning customers won’t need to contemplate a move to optical networking.
Tan therefore predicted Broadcom has “line of sight” to win $100 billion or more in revenue from AI chips alone in 2027.
Broadcom’s overall semiconductor business recorded $12.5 billion revenue for the quarter, up 53 percent year-over-year. Revenue for non-AI chips was steady at $4.1 billion.
Software propped up by VMware
Broadcom’s software infrastructure business – the combined CA, Symantec Enterprise, and VMware – delivered one percent revenue growth to land at $6.8 billion. VMware revenue grew 13 percent, suggesting customers weren’t wild about CA and Symantec.
Tan was bullish about VMware’s prospects, saying its flagship Cloud Foundation private cloud suite is an “essential layer” of infrastructure for enterprise AI deployments.
“VCF cannot be disintermediated or replaced,” he said. “AI will create the need for more VMware, not less.”
And as if to prove it, Broadcom forecast Q2 software revenue will reach $7.2 billion, representing nine percent growth. The company predicted overall Q2 revenue will hit $22 billion, up 47 percent year-over-year.
Investors appear to have liked those numbers, and the announcement of a new share buyback scheme, as Broadcom stock jumped almost five percent in after-hours trading. ®