Artificial intelligence got personal for me last week. It also became clear just how quickly it is now developing, and that it is more complicated than we thought.
A week ago, someone sent me a link to an online article describing a flaming confrontation between me and the CEO of the Commonwealth Bank, Matt Comyn, on the set of 7.30.
The story was 2,000 words long, very detailed, and had pictures of Comyn and me arguing in front of 7.30 host Sarah Ferguson, before Matt threw away his microphone and stormed off.
Not a word nor a photo of it was true. It was an AI fake. I won’t link to it because it would draw more attention to it. The article was convincing, and many people have since written to me asking whether it is true.
Then that night, a bloke named Matt Shumer, who co-founded an AI company in New York City, published a 4,800-word essay headed “Something Big is Happening”, which has since gone viral.
In it, Shumer talks about two AI releases the previous week that came within minutes of each other.
He wrote: “… on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch … more like the moment you realise the water has been rising around you and is now at your chest”.
He went on to say: “I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just … appears.”
Mind you, cars have been doing this for a while. I got a call last week from someone at Tesla offering to show me his autonomous car. He says he tells it to drive to the airport and it does — he doesn’t touch the steering wheel, the brake, or the accelerator for the entire trip. In Australia, it’s against the law to sit in the back seat reading a book, but that is allowed in a few American cities, and driverless taxis are now doing 450,000 trips per week.
AI wake-up call
It’s been a big week for wake-up calls.
On Tuesday, Mrinank Sharma, the head of safeguards research at Anthropic (the company that developed Claude AI), resigned, posting a letter on X: “The world is in peril. Not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.
“We appear to be approaching a threshold where our wisdom must grow in equal measure to our ability to affect the world, lest we face the consequences. Moreover, throughout my time here, I’ve repeatedly seen how hard it is to let our values govern our actions.”
He says he’s going off to study poetry and devote himself to the practice of “courageous speech”.
On Wednesday, I had coffee with a young man, 27, who decided a couple of years ago to start a business training people in how to use AI. It didn’t work because he quickly realised there is no point teaching people to use something that is simply going to replace them. Instead, he’s building an AI agent that will replace receptionists by answering the phone and dealing with customers’ queries so they don’t know it’s not human. At this stage, he says, it will tell customers they’re talking to an AI.
The AI will have all the company’s data and be able to tell customers whatever they want to know.
On Wednesday night, our time, Jimmy Ba, one of the co-founders of Elon Musk’s xAI, which developed Grok AI, also resigned. He posted this: “We are heading to an age of 100x productivity with the right tools. Recursive self-improvement loops likely go live in the next 12mo. It’s time to recalibrate my gradient on the big picture. 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”
On Thursday, the head of AI at Microsoft, and one of the founders of Google’s DeepMind, Mustafa Suleyman, told The Financial Times: “White-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months.”
How will investors respond?
It’s now dawning on the world, and for the first time on investors, that artificial intelligence is moving well ahead of any plans to deal with its consequences.
The near-simultaneous release of GPT-5.3 Codex by OpenAI and Opus 4.6 by Anthropic was one of two events in early February that brought “AI fear” (as analysts call it) to the share market.
The first, on February 3, was Anthropic’s release of plug-ins for Claude Cowork that will allow it to automate professional work, starting but not ending with the back-office work done in law firms.
Software and data firms like Salesforce, Thomson Reuters and Adobe saw their shares drop sharply, and in Australia, software companies like Technology One, Xero and Wisetech did likewise.
Last weekend, a Chinese lab called OpenBMB released an AI agent called MiniCPM-o 4.5, which is open-source and has 9 billion parameters.
It runs on local devices, like laptops and smartphones, and doesn’t need data centres, so suddenly there are question marks around the massive investment in them.
I asked Google’s Gemini AI how that compares with its own parameters. It replied that it “likely” has more than a trillion parameters and remarked: “Comparing a model like MiniCPM-o 4.5 to a flagship like Gemini 3 Flash is a bit like comparing an agile, high-performance sports car to a massive, hyper-efficient logistics network. They both get the job done, but their ‘engines’ are built very differently.”
Oh right.
The fourth industrial revolution is here
As I lay awake at the end of the week staring at the ceiling in the dark, I realised that my earlier frames of reference for AI and what’s being called the fourth industrial revolution are suddenly inadequate.
I had been thinking about AI in two ways: as a return to labour that is owned — a part of capital — rather than employed, about two centuries after the abolition of human slavery; and as a collapse in the price of intelligence.
The price of intelligence is falling towards zero in the same way that the price of communication fell with the invention of packet switching and TCP/IP. That is, the internet.
Those two aspects of AI explain why every company in the world is rushing to use it: costs will fall and productivity will surge. As a result, so will profits, and every corporate executive is required to increase profits.
Moreover, company owners and their boards are motivated to shift production from labour to capital wherever possible because capital is where wealth is created, while employed labour drains it.
But what’s happening is far more profound even than the issues of cost and ownership, as profound as they are.
We are building — or perhaps have already built — an alien intelligence that thinks at least one thousand times faster than any human, knows every bit of knowledge that every human has ever known, is evolving a million times faster than human evolution and will end up vastly outnumbering humans.
Calling it another industrial revolution seems insufficient.
Alan Kohler is finance presenter and columnist on ABC News and he also writes for Intelligent Investor.