Fears of an AI bubble overlook where tech experts believe a lot of real value in the AI economy will come from: not from large language models themselves, but from what we’ll build on top of them. It’s early days, but we’re already starting to see useful applications. AI can already help you code, detect spam for you and, perhaps controversially, generate increasingly realistic videos on the spot.
Those in the tech industry compare it to the internet: a general-purpose technology that will generate real economic value from innovations built to work with it.
And yet, fears are mounting that this potential revolution is built on an expensive bubble. Valuations are high, and four Silicon Valley giants alone are expected to spend an eye-watering $400 billion US on AI this year. But beyond the stock market nerves, what do actual computer scientists think of the prospects for AI?
WATCH | Investors are pouring billions into AI:
If the AI bubble pops, will the whole U.S. economy go with it? | About That
As investors pour billions into artificial intelligence, warnings of a looming AI bubble are intensifying. Andrew Chang breaks down what’s fuelling those fears. Plus, how one reporter’s question struck at the core of U.S.-Saudi relations.

AI as the foundation of a new economy
Daniel Wigdor, a computer scientist at the University of Toronto, says chatbots like ChatGPT “are just very thin and simple demos built on top of incredibly powerful technologies.”
“It’s almost like trying to judge the value of electricity when the only use of it that you see is a blinking light,” said Wigdor, who is also CEO of AXL, an AI-focused venture studio.
Computer scientist Daniel Wigdor says we’ve only seen the beginning of what AI can do. (Sean Pollock)
So, just as electricity went beyond that blinking light to powering, say, the 20th-century factory, AI could power lots of innovative products.
Independent tech analyst Benedict Evans says at “a minimum, this is a sort of equivalent shift in the capability of computers to smartphones or the internet or PCs.”
Of course, we already rely on some form of AI every day, from helping us take better photos to regulating traffic lights. But the current buzz is specifically about the potential of LLMs. Said Wigdor: “For the first time, we’re now starting to get these platforms where we can very quickly and easily train an AI.”
The challenge is that while vast amounts of money are being pumped into building LLMs, those transformative applications are only just emerging.
Benedict Evans, an independent tech analyst, notes that a technology can be ‘completely transformative’ and still a bubble at the same time. (Submitted by Benedict Evans)
Arvind Narayanan is a computer science professor at Princeton University and co-author of the book AI Snake Oil, which explains what AI can do and where it’s overhyped. Major technological innovation comes in stages, he said: “There’s the development of the underlying technology,” followed by putting those technologies into useful products, then early adoption and finally, broad dispersal across industries.
“Those are four different things and they happen on four different timescales.”
Winners and losers
The OpenAIs and Googles of this space stand to win if their investments and strategy pay off. And there are Canadian success stories like Cohere, which has thrived by finding a niche in business applications.
But the ultimate winners may be companies that don’t exist yet. Right now, the leading AI players are the ones developing LLMs, and Wigdor thinks they can take advantage of their first-mover status to sell their technologies.
Arvind Narayanan, a computer science professor at Princeton University, says major technological innovation comes in stages. (Submitted by Arvind Narayanan)
Historically, with foundational technologies, “it’s not the people who invent them who are going to be the ones necessarily who are the most creative users of those technologies. And so you get creative users coming along and building new things and building on top of it,” he said.
Wigdor compares it to an investor at the dawn of the smartphone era choosing whether to invest in a telecom like AT&T or a social media company like Facebook. “It’s a no-brainer,” he said.
But the concern is whether tomorrow’s technological revolution rests on the health of a few AI leaders building big models while placing huge financial bets. Narayanan argues this is also an issue from a technical point of view, in that “there are a lot of risks in terms of how much energy is being concentrated in just one way of doing things.”
There is, though, a role for smaller, open source alternatives to the big, proprietary models. Narayanan thinks “the absolute leading models are probably going to be proprietary ones, but the open ones are going to be probably just a few months behind the state of the art.”
Notably, just this week, Chinese company DeepSeek released a new open source model, which promises to be better at coding and math.
WATCH | More about DeepSeek:
DeepSeek shows AI can be done on the cheap, says tech analyst
Devindra Hardawar, senior editor at Engadget, says DeepSeek performs better in initial tests, was cheaper to build and runs on older equipment than its American competitors. Its release kicks off a ‘year of reckoning’ for the AI space, he says.
Kevin Leyton-Brown is a computer science professor at the University of British Columbia, where he runs the Centre for AI Decision-Making and Action. He said “a lot of capabilities don’t need the really biggest models,” noting that even the big players like OpenAI or Google “don’t use their super expensive model most of the time when they interact with you.”
Crucially, the role today’s AI giants play may depend on how much those companies decide to build: are they just going to build the LLMs that serve as the basic technology, or are they going to build all the applications themselves, too?
“The tech way of describing it would be how far up the stack does the model go?” said tech analyst Evans. Do you “just do everything by asking ChatGPT? Or does it get unbundled and you have many layers of different applications?”
Kevin Leyton-Brown, a computer science prof at the University of British Columbia. (Submitted by Kevin Leyton-Brown)

So, are we in a bubble or not?
Even if this technology is transformative, that doesn’t mean there isn’t a mismatch between that potential and short-term reality. Evans notes that “something can be completely transformative and a bubble, can’t it?”
He compares it to the development of the internet. “Like everything that people said about the internet in 1999 was true. It just happened 10 years later, [and] for different companies,” he said, referring to early speculation on internet companies that ended with the dot-com crash.
None of this has been helped by AI companies’ at times bombastic rhetoric about “superintelligence.” The reality is it may be some time before work processes and systems adopt the new technology. And it’s also possible that different approaches to advanced AI, such as so-called world models, will ultimately triumph over LLMs.
Leyton-Brown believes “there’s pretty strong evidence that something enormous is going on here.”
“We don’t know exactly what it’s good for and what its inherent limitations are, and what its killer app is,” he said, but we do know that this technology is enabling a lot of things that used to be difficult or impossible to do.
It’s the gap between where we are now and that “killer app” that remains an open question.