That’s because technology has hitherto been something we build. We determine what it is and what it will do, and design it to achieve those ends. We cannot foresee all of its applications and cannot prevent people appropriating it for nefarious purposes. But however it is used, it is an extension of our various goals. We understand how it works, and we put it to use accordingly. But AI is not something we build. It is instead something we train. That is, we give it the tools to direct itself.
As if to illustrate this starkly, the companies developing AI are training it to code. That is, they are training AI to program itself. And as AI reaches the point of thinking faster and better than any human can, it follows that AI itself will quickly become the fastest, most sophisticated computer programmer on the planet. At that point, it will make no sense to talk of AI in the way we talk of other technology. It will no longer be a tool; it will be an agent.
Once it achieves this, it can begin developing itself. What might have taken human coders years might take AI weeks. This development makes it better and faster, which in turn makes it better and faster, all of which amounts to exponential acceleration.
At this point, we’re no longer even talking about artificial intelligence as we understand it. We’re talking about what Daniel Kokotajlo calls “superintelligences”, where AI will become superhuman at everything, and humans will have become largely redundant. At this point, mass job losses will be the least of our concerns.
Kokotajlo should know of what he speaks. He is a former employee of OpenAI – the company that developed ChatGPT – who left because he no longer thought it capable of addressing the risks it was unleashing. He thinks we’ll start seeing superintelligences by 2027 or 2028. Whatever the timeline, the fundamental problem here is one of mastery. Unlike previous technology, even at this crude, early stage, AI is already behaving in ways its inventors don’t understand.
One example Kokotajlo offers is that AI has begun lying. Not making an error or drawing on bad data, but giving answers that are untrue, and which it knows to be untrue. No one knows why this is happening, and its makers are struggling to stop it. But if we think of AI as a digital brain that can generate its own thoughts, we should be unsurprised that it starts pursuing its own aims, which have little to do with ours, and which we may not even be aware of. Once that accelerates, any pretence that we are AI’s masters will have evaporated.
Of course, Kokotajlo is offering but one forecast. It is one based on an intimate knowledge of the technology and of the companies producing it; companies he says are well aware of the scenarios he describes and even embrace aspects of them. But even so, prediction is fraught, and his may not come to pass.
All manner of things could intervene in this process. Something like, say, regulation. Kokotajlo thinks this is unlikely because humans tend to be terrible at dealing with risks we haven’t experienced and can’t easily imagine. You might even say he’s assuming history will simply repeat. Perhaps, then, we should resist doing the same.
Waleed Aly is a broadcaster, author, academic and regular columnist.