This article is an on-site version of our The State of AI newsletter. To read earlier editions of the series, click here. Explore all of our newsletters here
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review.
In this final edition, MIT Technology Review’s senior AI editor Will Douglas Heaven talks with Tim Bradshaw, FT global tech correspondent, about where AI will go next and what our world will look like in five years’ time.
We really hope you’ve enjoyed the series and we’d love to hear your thoughts, so we’ve created a short survey — if you have any ideas, please do share them.
You can find all of the earlier discussions on the US vs China, energy constraints, chatbots, economic singularity and the future of war here. And a last reminder for subscribers: on Tuesday December 9 at 1pm ET (6pm GMT) you can join the FT’s Richard Waters along with MIT Technology Review’s David Rotman and Mat Honan for an exclusive live conversation on how AI is reshaping the global economy. Register for the event here.
Will Douglas Heaven writes
Every time I’m asked what’s coming next, I get a Luke Haines song stuck in my head: “Please don’t ask me about the future/I am not a fortune teller.” But here goes. What will things be like in 2030? My answer: same but different.
There are huge gulfs of opinion when it comes to predicting the near-future impacts of generative AI. In one camp we have the AI Futures Project, a small donation-funded research outfit led by former OpenAI researcher Daniel Kokotajlo. The non-profit made a big splash back in April with AI 2027, a speculative account of what the world will look like two years from now.
The story follows the runaway advances of an AI firm called OpenBrain (any similarities are coincidental etc) all the way to a choose-your-own-adventure-style boom or doom ending. Kokotajlo and his co-authors make no bones about their expectation that in the next decade the impact of AI will exceed that of the Industrial Revolution — a 150-year period of economic and social upheaval so great that we still live in the world it wrought.
At the other end of the scale we have team Normal Technology: Arvind Narayanan and Sayash Kapoor, a pair of Princeton University researchers and co-authors of the book AI Snake Oil, who push back not only on most of AI 2027’s predictions but, more importantly, its foundational worldview. That’s not how technology works, they argue.
Advances at the cutting edge may come thick and fast, but change across the wider economy, and society as a whole, moves at human speed. Widespread adoption of new technologies can be slow; acceptance slower. AI will be no different.
What should we make of these extremes? ChatGPT came out three years ago but the jury is out on just how good various models are at replacing lawyers or software developers or (gulp) journalists, and new generations of models no longer bring the step changes in capability that they once did.
And yet this radical technology is so new it would be foolish to write it off so soon. Just think: nobody really even knows how the technology works — let alone what it’s really for.
As the rate of advance in the core technology slows down, applications of that tech will become the main differentiator between AI firms. (Witness the new browser wars and the chatbot pick and mix already on the market.) At the same time, high-end models are becoming cheaper to run and more accessible. Expect this to be where most of the action is: new ways to use existing models will keep them fresh and distract people waiting in line for what comes next.
Meanwhile, progress continues beyond LLMs. (Don’t forget there was AI before ChatGPT and there will be AI after it too.) Technologies such as reinforcement learning, the powerhouse behind AlphaGo, DeepMind’s board game-playing AI that beat a Go grandmaster in 2016, are set to make a comeback. There’s also a lot of buzz around world models, a version of generative AI that has a stronger grip on how the physical world fits together than LLMs do.
Ultimately, I agree with team Normal Technology that rapid technological advances do not translate to economic or societal ones straight away. There’s just too much messy human stuff in the middle.
But Tim, over to you. I’m curious to hear what your tea leaves are saying.
Tim Bradshaw replies
Will, I am more confident than you that the world will look quite different in 2030. In five years, I expect the AI revolution to have proceeded apace. But who gets to benefit from those gains will create a world of AI haves and have-nots.
It seems inevitable that the AI bubble will burst sometime before the end of the decade. Whether a venture-capital funding shake-out comes in six months or two years (I feel the current frenzy still has some way to run), swaths of AI app developers will disappear overnight. Some will be absorbed by the makers of the models upon which they depend for their underlying intelligence. Others will learn the hard way that you can’t sell services that cost $1 for 50 cents without a fire hose of VC funding.
How many of the foundation model companies survive is harder to call, though it already seems clear that OpenAI’s chain of interdependencies within Silicon Valley makes it too big to fail. But a funding reckoning will force it to ratchet up pricing for its services.
When OpenAI was created in 2015, it pledged to “advance digital intelligence in the way that is most likely to benefit humanity as a whole”. That seems increasingly untenable. Sooner or later, the investors who bought in at a $500bn price tag will push for returns. Those data centres won’t pay for themselves. By that point, many companies and individuals will have come to depend on ChatGPT for their everyday workflows. Those able to pay up will reap the productivity benefits, scooping up excess computing power as others are priced out of the market.
Being able to layer several AI services on top of each other will provide a compounding effect. One example I heard on a recent trip to San Francisco: ironing out the kinks in vibe coding is simply a matter of taking several passes at the same problem and then running a few more AI agents to look for bugs and security issues. That sounds incredibly GPU-intensive, implying that making AI deliver will require customers to pay far more than most do today.
The same holds true in physical AI. I fully expect robotaxis to be commonplace in every major city by the end of the decade and even humanoid robots to be in many homes. But while Waymo’s Uber-like prices and the low-cost robots produced by China’s Unitree give the impression that they will be affordable for all, the compute cost involved in making them useful seems destined to turn these into luxuries for the well-off.
Perhaps some breakthrough in computational efficiency will avert this fate. But the current investment boom means Silicon Valley’s AI companies lack the incentive to build leaner models. That raises the likelihood that the next wave of AI innovation will come from outside the US, be it China, India or somewhere more left-field.
Silicon Valley’s AI boom will surely end before 2030. But the race for global influence over the technology’s development — and the political arguments about how its benefits are distributed — will continue well into the next decade.
Will Douglas Heaven responds
I am with you that the costs of this technology are going to lead to a world of haves and have-nots. Even today, $200+ a month buys power users of ChatGPT or Gemini a very different experience from the one people on the free tier get. That capability gap is certain to widen as model makers seek to recoup costs.
We’re going to see massive global disparity too. In the global north, adoption has been off the charts. A recent report from Microsoft’s AI Economy Institute notes that AI is the fastest-spreading technology in human history: “In less than three years, more than 1.2 billion people have used AI tools, a rate of adoption faster than the internet, the personal computer, or even the smartphone.” And yet AI is useless without ready access to electricity and the internet; much of the world still has neither.
I remain sceptical that we will see anything like the revolution that many insiders promise (and investors pray for) by 2030. When Microsoft talks about adoption here, it’s counting casual users rather than measuring long-term technological diffusion, which takes time. Meanwhile, casual users get bored and move on.
How about this: If I live with a domestic robot in five years’ time, you can send your laundry to my house in a robotaxi any day of the week.
JK! As if I could afford one.
Further reading
What is AI? It sounds like a stupid question, but it’s one that’s never been more urgent. In this deep dive, Will unpacks decades of spin and speculation to get to the heart of our collective technodream
AGI — the idea that machines will be as smart as humans — has hijacked an entire industry (and possibly the US economy). For MIT Technology Review’s recent New Conspiracy Age package, Will takes a provocative look at how AGI is like a conspiracy
The FT examined the economics of self-driving cars this summer, asking who will foot the multi-billion-dollar bill to buy enough robotaxis to serve a big city like London or New York
A plausible counter-argument to Tim’s thesis on AI inequalities is that freely available open-source (or more accurately, “open weight”) models will keep pulling down prices. The US may want frontier models to be built on US chips but it’s already losing the global south to Chinese software
Recommended newsletters for you
The AI Shift — John Burn-Murdoch and Sarah O’Connor dive into how AI is transforming the world of work. Sign up here
Newswrap — Our business and economics round-up. Sign up here