There are a lot of things one could point to as evidence that a bubble is inflating around artificial intelligence. One of those things is a company called Fermi Inc. FRMI-Q
Founded this year, Fermi aims to roll out 11 gigawatts of generating capacity to power AI data centres, roughly enough to light up Alberta. Fermi holds a 99-year lease on a swath of Texas land able to simultaneously house some of the largest data centres “currently in existence,” according to the company. Fermi went public on the Nasdaq in October and soon hit a valuation of more than US$17-billion. Fermi has zero profit, zero revenue and a single letter of intent from a customer.
The logic behind the enthusiasm is the same logic driving most AI investment today. First, it is believed that generative AI will usher in a new economic age. To build more powerful AI models and to support widespread usage, the world needs data centres stocked with computer chips, usually graphics processing units (GPUs), designed to handle these intense processing needs.
A power plant substation and mechanical cooling room are under construction at Meta’s data centre in Eagle Mountain, Utah. AI data centres can require massive amounts of electricity to operate, so new supply is needed to meet the high demand.
Christie Hemm Klok/The New York Times Service
AI data centres can require gargantuan amounts of electricity, which the world cannot easily supply, so new capacity is needed. Fermi can deliver this electricity, someday, provided it can secure customers, raise huge amounts of money, deploy infrastructure that includes nuclear reactors, hit its timelines, navigate a tumultuous AI landscape and more.
Fermi may well pull it off, but its very existence shows we are at a point when AI dreams and a land lease are worth billions.
“Bubble” is an imprecise term, which is why people on all sides can marshal evidence to build their case. Yet warnings that the hype and spending on AI, a technology whose financial returns are deeply uncertain, amount to a speculative frenzy have become so commonplace as to be mundane. Some of the biggest names in tech, including Sam Altman, Mark Zuckerberg, Jeff Bezos and Sundar Pichai, have acknowledged the likelihood of over-investment, even if they are huge proponents of the long-term value of AI.
This is not the U.S. financial crisis of 2008, in which the mechanisms of the collapse were so opaque that few people saw it coming. Instead, we have disclosures from publicly traded companies such as Microsoft Corp. MSFT-Q, Google, Amazon.com Inc. AMZN-Q and Meta Platforms Inc. META-Q, which are all helping push data centre spending to US$580-billion this year, more than global oil investment. Loosely adjusted for inflation, that’s also more than the cost of the Marshall Plan to rebuild Europe after the Second World War and the Apollo program to put astronauts on the moon, both of which spanned years.
The Amazon Web Services data centre, known as US East 1, in Ashburn, Va., in October, 2025. Amazon is among the major publicly traded tech companies pushing data centre spending to US$580-billion this year. Jonathan Ernst/Reuters
Even if you are not directly invested in AI, you are exposed to it. In October, the top 30 AI-related stocks made up 44 per cent of the S&P 500 Index, according to JPMorgan Chase & Co. For investment managers who lived through the dot-com era, when optimism about the dawn of the internet ran ahead of reality, the flashbacks are hard to stave off.
Kim Shannon, founder of Sionna Investment Managers in Toronto, can’t help but see parallels. “Aggressive accounting practices surfacing out of private credit spaces, extreme high multiples and retail speculative behaviour are all similar to that period,” she says.
While history may not always repeat, it is full of booms and busts. Railways, electricity and the internet all changed the world, but not without financial carnage along the way. Why would AI be different?
John Ruffolo, founder of Maverix Private Equity in Toronto. Mr. Ruffolo, who survived the dot-com bust, is skeptical of sky-high valuations for AI companies that “make no mathematical sense.” Melissa Tait/The Globe and Mail
John Ruffolo has had concerns about AI infrastructure spending for at least a year. He believes AI will pay off in the long run, but in the meantime, there is clearly an investment bubble. “It is so overhyped,” he says. “There’s so much FOMO going on.”
Now the founder of Maverix Private Equity in Toronto, Mr. Ruffolo is a survivor of the dot-com bust and once worked as an adviser to Nortel Networks Corp., a casualty of the era. His suspicions have been raised by AI company valuations that “make no mathematical sense,” plans for impractically large data centres, the lack of clarity around what it truly costs to use AI applications and the huge revenues needed to justify everything. “What’s put me way over the top is all of this circular funding,” he says.
He’s referring to a number of deals involving the biggest names in AI. To take one example, Nvidia Corp. NVDA-Q, the dominant provider of GPUs, struck an arrangement with OpenAI in September to invest up to US$100-billion in increments as the ChatGPT-maker deploys 10 gigawatts of data-centre capacity.
Nvidia chief executive officer Jensen Huang has said the arrangement allows the two companies to work even more closely on a technical level and called OpenAI a “once-in-a-generation” company that will produce “extraordinary returns.” (He has also shot down the notion of a bubble.) OpenAI is also a user of Nvidia’s GPUs, of course, so it appears as though the supplier is subsidizing the customer. “Is this irrational exuberance? That’s what I think it is,” Mr. Ruffolo says.
Nvidia has been funding the AI industry for years. In a recent report, Toronto’s Veritas Investment Research identified 80 deals since 2023, totalling US$49-billion, in which Nvidia backed a customer. These deals have left Veritas analyst Benjamin Butler scratching his head. “It doesn’t make sense for them to be spending this much time pursuing the breadth of these equity investments,” he says.
Putting its sizable balance sheet to work isn’t irrational on its own, he continues, but for a company of Nvidia’s size, the more important goal is revenue growth, not a score from a speculative startup bet. That’s led Mr. Butler to conclude that Nvidia is not merely striking venture capital deals but, rather, providing vendor financing to bridge losses and sustain GPU demand from cash-strapped customers. “The result is a self-reinforcing demand cycle driven more by supplier funding than end-user adoption,” he wrote in his report. (Nvidia has said it doesn’t require portfolio companies to use its technology, though there aren’t many other options.)
Nvidia’s ties to the AI ecosystem raise questions about the quality of its earnings, especially if customers hit trouble. Nvidia faces a double risk if OpenAI falters or struggles to raise capital: The value of its investment will drop, and so will demand for chips. Nvidia struck a similar deal with OpenAI competitor Anthropic this month, investing up to US$15-billion along with Microsoft. Anthropic, in turn, will spend US$30-billion on compute services from Microsoft, running on Nvidia systems.
Nvidia is also the fifth-largest shareholder in CoreWeave Inc. CRWV-Q, a former cryptocurrency miner that now provides “compute” (the industry’s term for access to computer chips) to build and run AI models. CoreWeave, which is losing money and is heavily indebted, builds and operates data centres and purchases GPUs from Nvidia.
Canada’s Minister of AI and Digital Innovation Evan Solomon, left, shakes hands with Aidan Gomez of Cohere at the All In AI conference in Montreal in September. Nvidia has supported Canada’s AI ecosystem by backing Canadian AI champion Cohere Inc. in more than one funding round. Christopher Katsarov/The Canadian Press
In September, Nvidia signed an agreement to purchase unsold compute capacity from CoreWeave through 2032. That limits risk for CoreWeave, and allows it to purchase GPUs from Nvidia more aggressively. “Since they buy more GPUs, they’re going to have more cloud capacity, and they know they can sell it back to Nvidia in the worst case,” Mr. Butler says. “The structure is an incentive to inflate revenue at both Nvidia and CoreWeave, and raises the question of economic substance.”
To put it another way, if I’m running a bicycle rental shop and the kind folks who sell me the bicycles guarantee they will pay to ride around on them if I can’t find customers, I might just buy a lot more bicycles.
A CoreWeave spokesperson said the contract with Nvidia is structured on commercial terms and involves no special treatment. The deal also allows CoreWeave to sell short-duration capacity to smaller clients that aren’t in a position to ink the long-dated agreements the company typically signs, the spokesperson said.
Nvidia’s ample cash has supported Canada’s AI ecosystem, too. Notably, it has backed Canadian AI champion Cohere Inc. in more than one funding round. Funnily enough, Cohere is a CoreWeave customer. In effect, Nvidia-backed CoreWeave buys chips from Nvidia, and Nvidia-backed Cohere purchases compute from Nvidia-backed CoreWeave, which, again, has purchased chips from Nvidia – and, well, apologies if you’re dizzy.
CoreWeave founder and CEO Mike Intrator rings the opening bell, surrounded by company executives and their families, during the company’s IPO at Nasdaq headquarters on March 28, 2025. Nvidia is the fifth-largest shareholder in CoreWeave, a former cryptocurrency miner that now provides access to computer chips to build and run AI models. Michael M. Santiago/Getty Images
Customer concentration in the industry is high, creating interdependencies. CoreWeave disclosed that in the first nine months of the year, one customer made up 70 per cent of its revenue. About 58 per cent of Oracle Corp.’s ORCL-N cloud revenue backlog, meanwhile, is tied to OpenAI, according to a report from Jefferies Financial Group.
These kinds of arrangements worsened the pain during the dot-com crash. In the late 1990s, the internet was dawning, and in order to get everyone online, the world needed more fibre-optic cable and telecom infrastructure. Equipment makers such as Nortel provided financing to cash-strapped carriers to purchase their wares; in one example, Nortel struck a $1-billion purchase agreement with a customer and agreed to provide up to half of the financing. The practice allowed Nortel to record revenue, while customers got the funds to build the future and, maybe, eventually, see a profit. The strategy worked until it didn’t. The industry overbuilt fibre-optic cable ahead of real internet demand, revenue didn’t materialize and the highly indebted carriers relying on vendor financing couldn’t pay their bills, triggering bankruptcies and writedowns.
But for some analysts and investors, there’s something too deliberate about trying to paint Nvidia as Nortel, or data centres as fibre-optic cable. Indeed, there are crucial differences between now and then. “I’m a tiny bit lost as to how it really compares,” says Chris Stuchberry, senior portfolio manager at Wellington-Altus Private Wealth in Toronto. Back then, companies that were burning through cash were anticipating internet demand that did not materialize until many years later. “Right now, there’s too much demand for compute and data centres,” he says.
Companies such as Meta, Google and Amazon would rather overbuild than risk falling behind a competitor given the promise of AI, and demand for compute is high. “As fast as we can build infrastructure, it’s getting consumed,” says Kevin Deierling, senior vice-president of networking at Nvidia. “There’s massive revenues being generated, and there’s nothing available in the channel.” Francois Chadwick, Cohere’s chief financial officer, suggests the appetite for compute is voracious. “We can consume more and more and more,” he says.
The hugely profitable U.S. tech companies, referred to as hyperscalers, are among the players buying the most chips, too, and they’re using their own ample cash to do so.
The dot-com era was further marred by publicly traded tech startups that lacked business plans, while today’s AI software startups tend to be privately held. “Dot-com companies had no fundamentals. They could not produce anything,” says Nikola Gradojevic, a professor and Fidelity Chair in finance at the University of Guelph. “With AI, we see returns. Not much, but we see returns.”
Still, for some observers, the parallels are too stark and a sign that the foundations of the AI buildout are getting shaky. “This time, it’s different. That’s the famous line,” Mr. Ruffolo says. “All I say is, no, it’s not different. It’s shockingly the same.”
U.S. President Donald Trump presents a representation of Meta’s planned Hyperion Data Centre, shown to scale over Manhattan. In October, Meta struck a deal with an investment firm called Blue Owl Capital to raise US$27.3-billion in debt to fund the massive data centre campus. Jonathan Ernst/Reuters
The notion that the data-centre buildout is financed purely with cash isn’t true. In October, Meta struck a deal with an investment firm called Blue Owl Capital to raise US$27.3-billion in debt to fund a data-centre campus nearly as large as Manhattan. The debt won’t show up on Meta’s balance sheet, because it’s not technically held by Meta. Instead, it will reside with a new company, Beignet Investor LLC, in which Meta owns a stake. “These companies realize that this is a fairly risky endeavour and don’t want to be on the hook for it if things go sour,” says Peter Berezin, chief global strategist at BCA Research.
More debt financing appears inevitable. Analysts at JPMorgan wrote this month that because the scale of infrastructure required for AI is so massive – some US$5-trillion over the next few years, by one estimate, including related power supplies – all forms of financing will be required. That includes investment-grade debt, junk bonds, asset-backed securities and more off-balance-sheet agreements à la Meta. Tech companies tied to AI already make up 14.5 per cent of the high-grade debt market, including billions of dollars issued in bonds this year by Google, Meta and Oracle, the latter of which is now close to US$96-billion in debt.
Tech companies already account for 7 per cent of the high-yield market, and that could “easily double” in the next five years, JPMorgan analysts wrote. (For context, the telecom sector’s share climbed to 21 per cent before the dot-com bubble burst.)
One might say: So what? “What goes awry is the way everything goes awry when we financialize the sale of an asset,” says Paul Kedrosky, California-based partner at investing firm SK Ventures. “Eventually you end up creating an incentive to create far too much of the thing.”
The financial wizards conjuring these arrangements care about slicing up the debt into tranches to sell to investors, who care only about the yield. But demand for these credit products risks becoming detached from economic reality, he says, as happened with the mortgage-backed securities at the heart of the 2008 U.S. financial crisis. Risk spills over from the equity market into the credit market, and the whole system can become more complex and opaque.
Not every AI company is as creditworthy as Google or Microsoft, either. Mr. Kedrosky is deeply skeptical of smaller AI cloud companies. Buying GPUs and building data centres is expensive, and these companies have to borrow money to do so. CoreWeave is carrying about US$14-billion in debt, and its interest expense in the first nine months of the year totalled US$841-million, a surge of nearly 300 per cent from the year before. Some of the company’s debt carries interest rates as high as 15 per cent.
To meet payment obligations, these companies have to expand their customer bases and allot space in data centres to “low-quality” tenants, meaning venture-backed startups that need compute, Mr. Kedrosky contends. Startups, of course, are wont to fail from time to time. “The structure itself becomes incredibly risky, because as the debt becomes more expensive, it means the tenancy must become more speculative,” he says.
GPUs don’t have a long lifespan, either. In financial filings, hyperscalers put it at four to six years, in part because newer chip models keep arriving. As a result, cloud providers are compelled to upgrade their hardware, possibly at great expense, and it’s not yet clear how the quick depreciation of chips will affect revenue and profit margins. “You’re going to need to reinvest to keep this game going,” says Anthony Scilipoti, president at Veritas. “How often and at what cost is still a mystery.”
The price to access GPUs from cloud providers shows how quickly value can erode. The hourly rate to run workloads on Nvidia H100 GPUs has fallen about 30 per cent since September, 2024, according to Bloomberg data. A report from RBC Capital Markets found the average price to access H200 GPUs, a newer chip, fell about 29 per cent between December, 2024, and August, 2025.
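A rough, hypothetical sketch shows how those two forces, fixed depreciation and falling rental prices, can squeeze a GPU owner’s margins. The purchase price, rental rate and utilization below are illustrative assumptions, not figures reported in this story; only the four-to-six-year lifespan and the roughly 30-per-cent annual price decline come from the sources above.

```python
# Rough, hypothetical illustration of the GPU economics described above.
# The purchase price, starting rental rate and utilization are assumptions
# made for the sake of the sketch, not reported figures.

GPU_COST = 30_000        # assumed purchase price per GPU, in US$
LIFESPAN_YEARS = 5       # midpoint of the 4-6 year range cited in filings
START_RATE = 2.50        # assumed rental price, US$ per GPU-hour, year 1
PRICE_DECLINE = 0.30     # ~30% annual price erosion, per the cited data
UTILIZATION = 0.70       # assumed share of hours actually rented out
HOURS_PER_YEAR = 24 * 365

annual_depreciation = GPU_COST / LIFESPAN_YEARS

for year in range(1, LIFESPAN_YEARS + 1):
    rate = START_RATE * (1 - PRICE_DECLINE) ** (year - 1)
    revenue = rate * UTILIZATION * HOURS_PER_YEAR
    margin = revenue - annual_depreciation  # ignores power, staff, interest
    print(f"Year {year}: rental US${rate:.2f}/hr, "
          f"revenue US${revenue:,.0f}, "
          f"margin after depreciation US${margin:,.0f}")
```

On these illustrative numbers, the unit stops covering its own depreciation before its accounting life is up, and that is before power, staffing and interest costs, which is the squeeze smaller operators carrying expensive debt would feel first.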
While hyperscalers charge higher prices, these trends could create problems for smaller cloud companies that have popped up to offer AI compute. (A recent report from Seaport Research Partners identified at least 68 of them.) “There’s a bit of a gold rush mentality,” says Adam Hendin, CEO of cloud company Radium in Toronto. “Smaller data-centre operators are literally having their doors knocked on and being offered financing.” Not all of these players are experienced, and the double whammy of debt financing and dropping rental prices could lead to consolidation, he says, though he does not have concerns about the broader ecosystem.
Indeed, a few no-name cloud providers hitting trouble might not create many ripples. But there’s something else that could.
None of this – not the financing deals, the debt, the revenue gap – matters much if AI pays off. Despite a number of studies purporting to show the benefits, or lack thereof, for companies that adopt generative AI, we still don’t have a clear picture. “People are trying to weigh in very quickly with very poorly designed studies,” says Kristina McElheran, an associate professor in strategic management at the University of Toronto Scarborough.
Prof. McElheran, however, has co-authored an in-depth and rigorous study that sheds a lot of light. Using U.S. census data, she and her co-authors studied manufacturing firms that adopted AI – not the generative variety, but systems that could predict when a machine needs maintenance, to take one example. At first, companies experienced a loss of productivity and an erosion in profits before seeing longer-term benefits. “This is not surprising to anyone who has studied the history of technological change,” says Prof. McElheran. Technology changes rapidly but organizations do not. We all need time to figure it out. “What we see in the data is that it is really risky and costly, and it doesn’t pay off right away,” she says.
Here, we can see one way AI euphoria could fade: Investors wait longer than they hoped for gains to materialize, grow antsy and stampede for the exit. Peter Berezin at BCA refers to this as the “metaverse” moment, a reference to Meta’s big bet on virtual worlds a few years ago.
A participant joins a live news conference in the metaverse ahead of the CES tech show in Las Vegas in January, 2023. Meta’s big bet on virtual worlds a few years ago sent the company’s stock plummeting when investors soured on the metaverse. John Locher/The Associated Press
Investors eventually soured on the whole thing, sending the social media company’s stock plummeting. Something similar could happen if a tech giant announces another large AI project, and investors respond by selling their shares. “I don’t know if it’s imminent, but we’re getting close to it,” says Mr. Berezin. One sign is that the hyperscalers’ free cash flow, while still strong, has been eroding as they spend big on AI, increasing the likelihood investors will demand they curtail spending.
He also raises the possibility that AI proves to be remarkably similar to the airline industry. Commercial aviation is no doubt important to the world, but operating an airline is tough: Planes are expensive to buy and lose value; there’s not much to differentiate airlines, which means they have little pricing power; and profits can be thin and cyclical. Much the same could be said for AI – expensive to construct and operate, with little difference between models and the infrastructure powering it all. “We’re going to have a situation where investors get spooked, and they realize that those profits are cyclical rather than structural,” he says. “They’ll come down when the supply of compute catches up with demand. That’ll be the end of the whole AI trade.”
Given how much of the AI world revolves around OpenAI, any turbulence at the company could prove to be another catalyst. “The problem with OpenAI is they’re consuming so much capital,” says John Ruffolo at Maverix. “The moment they can’t find capital, this is where it starts to choke.”
OpenAI’s new data centre in Abilene, Tex., under construction in September, 2025. The company has committed to US$1.4-trillion in spending over the next eight years. Shelby Tauber/Reuters
OpenAI is on track for US$20-billion in annualized revenue this year, but it has committed to US$1.4-trillion in spending over the next eight. Even if OpenAI grew revenue by a factor of 20 over that time period (which would be astounding), that still leaves a yawning chasm that threatens to swallow the AI industry. (Mr. Hendin, though, chuckled at the idea of OpenAI struggling to raise funds. “They would have investors fighting over themselves,” he said.)
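A back-of-envelope calculation makes the size of that chasm concrete. The sketch below assumes revenue compounds smoothly from US$20-billion to 20 times that level over eight years; the smooth growth path, and the idea that every revenue dollar could be applied to the commitments, are simplifying assumptions for illustration, not anything OpenAI has projected.

```python
# Back-of-envelope check on the revenue-versus-commitments gap described above.
# The smooth growth path is a simplifying assumption, not a company forecast.

START_REVENUE = 20e9   # ~US$20-billion in annualized revenue this year
GROWTH_FACTOR = 20     # the hypothetical 20-fold increase over the period
YEARS = 8
COMMITMENTS = 1.4e12   # ~US$1.4-trillion in reported spending commitments

annual_growth = GROWTH_FACTOR ** (1 / YEARS)  # ~45% compound growth per year

cumulative_revenue = sum(
    START_REVENUE * annual_growth ** year for year in range(1, YEARS + 1)
)

print(f"Implied growth: {annual_growth - 1:.0%} a year, reaching "
      f"US${START_REVENUE * GROWTH_FACTOR / 1e9:,.0f}-billion by year {YEARS}")
print(f"Cumulative revenue over {YEARS} years: ~US${cumulative_revenue / 1e12:.1f}-trillion")
print(f"Spending commitments over the same span: ~US${COMMITMENTS / 1e12:.1f}-trillion")
```

Even on those generous assumptions, cumulative revenue over the eight years comes in below the US$1.4-trillion in commitments, and revenue is not profit: it also has to cover research, payroll and everything else.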
News reports have suggested OpenAI plans to go public next year, and a closer look at its finances could imperil confidence, too. “That could be the sunlight that disinfects,” says Brian Madden, chief investment officer at First Avenue in Toronto. But with all financial frenzies, it’s impossible to know what the trigger could be. “Sometimes it’s nothing. It just ends because the psychology turns,” he says.
The fallout from an AI investment bust would be severe enough to tip the U.S. economy into a recession, Mr. Berezin says. AI has propped up the stock market, which has in turn supported consumer spending, preventing a weak U.S. economy from contracting. Should the AI bubble burst, there goes the wealth effect from the stock market, triggering a vicious cycle. “Because stocks are falling, there’s less spending in the economy,” he says. “Lower sales mean lower corporate earnings and even lower stock prices.”
Doubt is already creeping into the market. Stocks tied to AI have shuddered in the past month, with Oracle and Meta each down about 20 per cent. Fermi, the speculative AI energy company, has seen its shares cut in half. CoreWeave is down close to 40 per cent, even though its most recent financial results showed a surge in revenue, narrowing losses and a large backlog.
The metaverse moment referred to by Mr. Berezin may have already happened, or at least a version of it: Meta’s stock dropped 11 per cent one day in late October when Mark Zuckerberg talked about “aggressively” building out AI infrastructure to prepare for the arrival of superintelligence.
“Those investing in AI are buying a dream,” says Mr. Scilipoti at Veritas. “The moment there are cracks in the belief system, then people start asking more questions.”
We all wake up from dreams. Maybe the sooner reality hits, the better. Otherwise, the fallout could get a lot worse.