Last month, the tech world turned with rapt attention to one man, OpenAI CEO Sam Altman, for the launch of the company's new AI model, GPT-5. The atmosphere recalled the scientists at Los Alamos waiting to witness the first atomic bomb test. GPT-5 was supposed to be another great leap toward the promised godlike telos of artificial general intelligence (AGI): a machine smarter than any human, and possibly smarter than the sum of human civilization itself.
But the release of GPT-5 was a dud. Despite years of relentless hype, the most significant part of the announcement was a model router that dynamically selects how much computing power to spend on each query. OpenAI pitched the router as a tool for faster, cheaper answers to simple questions. But as writer Ed Zitron detailed on his Substack, it adds little value for the customer while steering most users away from the best models, thereby lowering OpenAI's cloud bill. To paraphrase Peter Thiel, we wanted AGI and all we got was a model router.
The AGI dream was pushed heavily by Altman, who wrote in one blog post in early 2025 that "we know how to build AGI as we have traditionally understood it," and in another that the Singularity, a moment of apocalyptic transformation in which superintelligent machines surpass humans, had already begun, albeit in a "gentle" way. Yet online observers dubbed GPT-5's release "Gary Marcus Day," in honor of Dr. Gary Marcus, an eminent AI scientist who has been writing about the structural limitations of neural networks, and why they fall short of AGI, since his 2001 book The Algebraic Mind.
Weeks after the release, Altman conceded that the GPT-5 rollout was poorly handled and that many users were initially disappointed. He assured us the next model, GPT-6, would really be the one. He still plans to spend trillions on data centers, even as he warned that "overexcited" investors (though definitely not him) risked inflating an AI bubble, likening the current moment to the dot-com crash.
If the AGI dream is over, or even just delayed, the investor nightmare is only beginning. The fallout from AGI hucksters like Altman won't devastate just Silicon Valley and the tech sector. The U.S. economy is dangerously dependent on Big Tech, and it has priced those investments on the promise of AGI. What happens, not just to the Valley but to the global economy, if there is no AGI coming?
What Did Ilya See?
"What did Ilya see?" became a meme online following Ilya Sutskever's shocking decision to leave OpenAI in 2024, after Altman was reinstated as CEO. If anyone could claim to have invented ChatGPT, it would be Sutskever: a co-creator of AlexNet, the first neural network to take the world by storm; a co-founder of OpenAI; and its lead researcher for model development until 2023. Yet he had signed on to the board's decision to oust Altman for not being "consistently candid in his communications with the board." People were incredulous: How could a top AI engineer walk away while his company was building humanity's final invention, AGI? And anyway, don't bosses lie all the time? Who cares?
Perhaps Ilya saw something more troubling than Sam's lack of trustworthiness. Soon after leaving OpenAI, Sutskever gave a talk at a leading academic AI conference that, while technical, was heretical within the industry. He predicted the end of pre-training scaling: the almost magical tendency of large language models (LLMs) like those behind ChatGPT to get smarter, more capable, and even pick up skills never explicitly programmed into them, simply by making the model bigger.
The jump from GPT-2 (which no one outside Silicon Valley cared about) to GPT-3 (whose successor powered ChatGPT's launch in late 2022) came from making the base model much, much larger. GPT-4 followed the same recipe and felt like a similar leap forward. It seemed so simple: more data and parameters, better models. This set off a race to build the biggest, baddest AI model in the world. But judging by GPT-5, that strategy is nearing its end, just as Ilya saw.
On this question, I spoke with Dr. Marcus as he was driving in his convertible, presumably on a long-awaited victory lap. What the GPT-5 launch proved, Marcus said, is that LLMs alone are not the "royal road to AGI." He has been warning that scaling has limits, owing to LLMs' lack of symbolic reasoning and their penchant for hallucination, since the year ChatGPT first launched.
Unlike most software we're familiar with, LLMs are probabilistic in nature: they do not retrieve facts from their training data so much as sample statistically likely continuations of text. That looseness is the source of generative AI's power, but it also consigns these systems to never quite knowing whether what they produce is true. This explains the chasm in ChatGPT's performance. One day it is helping conduct Ph.D.-level research; the next it is miscounting the number of R's in "strawberry," or struggling to say whether 9.11 or 9.9 is the bigger number. (Note: 9.9 is bigger, a fact I determined without the use of AI.)
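A toy illustration of why that last question trips up a pattern-matcher: the same string of characters supports two conflicting readings, decimal numbers and software version numbers. The snippet below is ordinary Python, not anything an LLM actually runs; it simply shows that both orderings are "correct" under different conventions:

```python
# Read as decimal numbers, 9.9 is larger than 9.11.
assert 9.9 > 9.11

# Read as software versions ("major.minor"), 9.11 comes AFTER 9.9,
# because the minor component 11 is greater than 9.
def as_version(s: str) -> tuple:
    return tuple(int(part) for part in s.split("."))

assert as_version("9.11") > as_version("9.9")

print("decimal order: 9.9 > 9.11; version order: 9.11 > 9.9")
```

A model trained on oceans of text containing both conventions has no built-in mechanism for deciding which one a question intends; it can only guess at the statistically likelier reading.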
Hallucinations are a by-product of this structural feature of how LLMs operate. Because pre-training scaling makes the models bigger without giving them any mechanism for checking their outputs against ground truth, it cannot solve the hallucination problem. We saw this when GPT-4o was released last year and its hallucination rate increased compared to earlier models.
To be clear, LLMs are a useful technology that is here to stay in some form. The ability to understand and respond in natural language, to turn a single prompt in plain English into an essay or an animated video, and to learn from mistakes are all genuine achievements that will be part of every AI system moving forward. The productivity gains from these tools, while not as dramatic as once forecast, are real. They are "one part of AGI," according to Marcus, but they are missing key elements of intelligence, like symbolic reasoning: the ability to represent statements explicitly as true or false and draw reliable conclusions from them.
Marcus has long pushed for neurosymbolic approaches to reduce hallucinations. A union of neurosymbolic methods and LLMs may be the path forward, now that LLM progress is slowing without the technology's fundamental issues having been solved. But it will be years before new approaches can replicate the awesome progress of the past few years.
Maybe Sutskever saw the end of the scaling laws and left before his boss started promising the world something that couldn't be delivered. For Marcus, it's been clear for some time that we've "been in an era of diminishing returns for AI." While newer approaches, such as letting models spend more computation reasoning through a problem before answering (so-called test-time scaling), improve outputs, they show no sign of matching the great leaps in quality and capability that pre-training scaling once delivered. That means AGI, however defined, is out of reach with current methods. Even the consulting giant Gartner recently moved generative AI past its "Peak of Inflated Expectations" and sees it sliding toward the "Trough of Disillusionment," typically a period of waning investment and interest.
If rapid progress from the scaling laws has ended, it doesn’t just mean the technology must go back to the drawing board. It means the world-historical bets investors have made on AI will begin to sour.
This Is How the AI Economy Stops—Not With a Bang but a Whimper
Bubbles form and pop all the time in Silicon Valley, though the sheer size of this one will leave a lasting wound on the venture capital industry. Last year, American VCs poured a record $110 billion into AI startups, an astonishing 42 percent of the entire VC funding pie and far above the global picture, where only 18 percent of VC investment went to AI. Silicon Valley, famed for its allegedly contrarian style of thinking, seems to be herding itself off a cliff.
But if the expected exponential gains from pre-training scaling have instead given way to diminishing returns, if AGI remains a faraway dream and the industry's technical leader (OpenAI) has no clear pathway to it, then the Valley has made a terrible bet. According to CB Insights, there are 498 AI unicorns, or startups worth at least $1 billion, turning the $350 billion in VC investment since 2021 into a combined valuation of $2.7 trillion. Those valuations will be hard to justify if no one has a ready way to keep the current pace of progress going.
Despite expecting to hit $20 billion in annual recurring revenue (ARR) by the end of this year (though, as Zitron notes, the math here is a little fuzzy), OpenAI projects that it will remain unprofitable and lose $5 billion. By 2026, it expects those losses to grow to $14 billion a year, even as ARR jumps again, to around $30 billion. The startup doesn't expect to break even until it can clear $100 billion a year in revenue, a target set for 2029. Zitron considers OpenAI the paradigmatic example of what he calls "the rot economy": a company that "burns billions to lose billions."
Without new products or continued exponential progress, it's difficult to see a pathway for OpenAI to hit $100 billion in revenue (for reference, about a third of what companies like Microsoft, Amazon, and Google take in every year) in less than five years. It's equally difficult to square the financials of a company that plans to lose more than $100 billion by the end of the decade with a valuation on par with Coca-Cola's, or twice Spotify's, both of them companies worth well over a hundred billion dollars that have been profitable for years.
One OpenAI pivot in search of cash has been building its own massive AI superclusters. Megaprojects like the estimated $500 billion Stargate, once hyped as a new Manhattan Project, pitch massive data centers as "infrastructure for intelligence." The infrastructure comparison is apt, but perhaps not one the builders should want to make. If AI data centers are the railroads or fiber-optic cables of this buildout, then we can expect many of their builders to suffer punishing losses when the AI bubble pops. The largest railroad the U.S. ever built, the eastern half of the Transcontinental Railroad, went bankrupt. Two of the three largest builders of fiber-optic cable (WorldCom and Global Crossing) went out of business when the dot-com bubble burst.
By some measures, AI investment already exceeds the investment that preceded the dot-com crash. According to the Financial Times, "In 2000, at the telecoms bubble's peak, communications equipment spending topped out at $135bn annualised [$235 billion inflation-adjusted]." In a research note, Morgan Stanley put spending on AI infrastructure at around $200 billion in 2024, and it is expected to race past $300 billion this year. Supporting current AGI projections of computing demand through 2028 would require an estimated $3 trillion in capital expenditure.
But each day without AGI brings us a day closer to a financial reckoning. The valuations of companies like OpenAI and Nvidia depend on that projected infrastructure being fully built and fully used. No one will keep spending an additional $1 trillion a year in capex without AGI-sized returns. The data center buildout will increasingly look like a literal and figurative waste of energy, and investors may simply walk away. As the FT notes of the dot-com buildout, "The internet hasn't disappeared, but most of the money did."
While the bubble's popping may be felt first in data centers, other key technology players could also see a dramatic fall. Like Cisco in the dot-com era, Nvidia sells the "picks and shovels" of the AI gold rush. Back then, Cisco's networking equipment gave businesses the bandwidth to go live on the web; now, anyone who wants to run AI at scale needs GPUs, and Nvidia controls about 92 percent of that market. GPUs are the "spice" of the tech sector: necessary to the AI trade, dominated by a single source, and almost immeasurably valuable. But if Nvidia can no longer sell GPUs at monopoly prices into seemingly limitless demand, the whole company could stagger.
In the five-year run-up to the dot-com crash, Cisco saw its stock rise 17,000 percent, briefly making it the most valuable company in the world; over the past six years, Nvidia's stock has grown by 3,653 percent, and it too briefly became the world's most valuable company in the summer of 2024. During the dot-com crash (2000-2002), the NASDAQ lost 80 percent of its value, and so did Cisco. How would the market handle a drop of 50 percent or more in Nvidia, alongside a precipitous decline in Big Tech spending on data centers?
A Cascading Nightmare
In recent bouts of market turmoil, Nvidia has led the market in losses, a sign of how well investors understand its importance. But while Nvidia would be hit first and hardest, the entire tech industry could quickly come under threat.
Tesla, for example, is Wall Street's ur-AGI bet. Seasoned financial professionals have swallowed Elon Musk's many promises of fully self-driving Teslas (made for more than a decade now), Optimus humanoid robots, and trips to Mars once Starship is up and working. Investors have priced enormous future returns into Tesla, giving it a price-to-earnings multiple of 188. For comparison, the world's largest electric-vehicle producer, BYD, trades at a multiple of 27.
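A back-of-the-envelope way to read those multiples is the earnings yield, the inverse of the P/E: the annual profit a shareholder's dollar currently buys. This is a deliberate simplification that ignores future growth, which is precisely what a high multiple is betting on:

```python
# Earnings yield: annual earnings per dollar of share price, the inverse of P/E.
# Ignores growth expectations, which are exactly what a high multiple prices in.
def earnings_yield(pe: float) -> float:
    return 1.0 / pe

for name, pe in [("Tesla", 188), ("BYD", 27)]:
    print(f"{name}: P/E {pe} -> {earnings_yield(pe):.2%} earnings yield")
```

At a multiple of 188, each dollar of Tesla stock currently buys roughly half a cent of annual profit, versus about 3.7 cents for BYD; the entire gap is the market's wager that robots and robotaxis will arrive.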
Tesla's core business is rotting just as the speculative one runs headlong into the AGI reckoning. The company has posted six straight quarters of declining sales. The Cybertruck, a rolling insult to the concepts of design and taste, has been declared a flop. Tesla has lost both the $7,500 Inflation Reduction Act tax credit for EV buyers and the lucrative regulatory credits it sells to gas-powered automakers, a revenue stream that has netted the company $11.8 billion over its lifetime. According to financial analyst Gordon Johnson of GLJ, "Without regulatory credit sales, Tesla loses money in its core business."
Musk’s approach to autonomous vehicles and humanoid robots has always relied on scaling. But without robots or robotaxis deployed soon, the idea of Tesla being worth more than every other automaker in the world combined, as it was last December, starts to look suspect.
When you turn to the broader U.S. economy, the numbers really make you sweat. Just seven companies (Amazon, Apple, Tesla, Nvidia, Meta, Alphabet, and Microsoft—the “Magnificent Seven”) now account for 34 percent of the total S&P 500 stock index, an all-time record for market concentration. In the first two quarters of 2025, Big Tech spending on AI infrastructure contributed more to U.S. GDP growth than consumer spending, which is a little worrisome, since the U.S. consumer makes up 70 percent of the domestic economy.
On the one hand, this is a testament to the sheer scale of the AGI buildout; on the other, it is a flashing red light for an economy already contending with frightened consumers, a softening labor market, a frozen housing market, and roiling uncertainty from Trump's tariffs. The economic tide seems to be rushing out everywhere except tech, and tech may itself be a bubble on the verge of bursting.
Capital can certainly be patient: Investors waited years for Amazon to turn a profit. But that is the exception, not the rule. It might take only a few AI stocks tumbling, plus a growing disinclination to fund a technology that may be encouraging people to take their own lives, for the animal spirits to turn and for everyone who made a giant directional bet on a technology in limbo to head for the exits.
The AGI bet comes from the same "rot economy" playbook that gave us WeWork, NFTs, the blockchain, the metaverse, and Theranos. But much to our detriment, Wall Street, pension funds, private capital, and foreign sovereign wealth funds all bought in, unwilling to miss the "next internet." The AGI dream has wormed its way into public, private, and international markets, all seduced by the same siren song of transformation, innovation, and disruption.
In 2007, a collapse in the subprime mortgage market blew a big enough hole in the economy to cause the Great Recession, a global conflagration rivaled in scale and scope only by the Great Depression. Markets have shrugged off economic tremors and eye-watering single-day losses before. But if the AGI bubble pops, Silicon Valley and the stock market face billions, potentially trillions, in dead capital, at their most fragile moment in years. Unlike during COVID, no same-month bailouts can be expected to stem the bleeding. When the AGI bubble bursts, can the contagion be contained?