People rightly spend a lot of time worrying about whether AI will ever be smart enough to destroy us. But what if it has already started?

This thought occurred as I surveyed a list I have been keeping of notable AI blunders in 2025 or, to be more precise, the humans brought low by using what we are told is today’s relatively basic artificial intelligence.

The technology may well be benign. But the list is still a reminder that our very human qualities of laziness, greed and ambition make us easy meat for it.

This is evident in my own industry, where the year began with some BBC app users receiving the startling news that the Spanish tennis great Rafael Nadal had a) become a Brazilian and b) revealed he was gay.

As the BBC reported, the news summary bearing this false information had been generated by an AI feature that Apple had rolled out for people with its latest iPhones. The tech giant later suspended the feature, which had also issued other false alerts.

What sort of competitive pressures might have led to its launch in the first place? That’s a question I wondered about a few months later when the Chicago Sun-Times newspaper said it had had a “learning moment” after presenting its readers with a summer reading list that included recommendations for books that did not, as it were, exist.

The list had come from a freelancer who worked with one of the newspaper’s “content partners” and had used AI.

Only months later in Pakistan, another apology emerged from the Dawn newspaper about a story on car sales that ended with a mortifying AI prompt offering to make the article “even snappier” with “punchy one-line stats” for “maximum reader impact”.

It’s easy to make mistakes in journalism. I’ve made them too often myself. But it’s worrying to imagine an industry beset by financial woes being increasingly drawn to cost-saving AI tools that make even more errors.

On the upside, newspaper bungles rarely put you in hospital. The same cannot be said for the AI-sourced advice one man followed to cut his salt intake. He ended up being hospitalised with a condition known as bromism, prompting researchers to warn in August that using AI for medical advice can lead to “preventable adverse health outcomes”.

The worlds of medicine and the media are not alone. 

The law has suffered a multitude of AI boo-boos. 

In Australia, a senior lawyer apologised to a judge presiding over a murder case that was delayed by AI-aided submissions that included fictitious quotes and non-existent cases. “At the risk of understatement,” said the judge, “the manner in which these events have unfolded is unsatisfactory.” 

You can read about the matter yourself on a database that tracks legal decisions involving AI hallucinations. It has found nearly 700 examples since April 2023 — five months after ChatGPT’s public launch.

Are there any fewer in other professional services?

I suspect there was a sharp intake of breath among consultants when they read October’s news that Deloitte would be partially refunding its payment for an error-blighted US$290,000 report, partly produced by AI, that it had prepared for the Australian government.

Consultancies, like so many other businesses, are investing millions in AI to maintain and sharpen their competitive edge.

There is no such excuse for the field in which some of the most notable AI mishaps have continued to occur this year: politics.

I’m not referring here to AI’s ability to spread mass disinformation and other threats to democracy, but to the more mundane yet worrying way in which politicians across the world have taken to this troublesome technology.

“We didn’t vote for ChatGPT,” has become a familiar refrain since Sweden’s prime minister, Ulf Kristersson, told a reporter he consulted AI tools for a second opinion on certain things. Others have used AI in what turned out to be inaccurate submissions to legislative hearings.

But the hardest move to comprehend came in September in Albania, where Prime Minister Edi Rama introduced an AI-generated government “minister” named Diella as a member of his cabinet. 

It may be a symbolic move, since the country’s constitution requires ministers to be citizens aged at least 18. But it is a disheartening one. It’s hard enough to know where you are with a human, for all our flaws and faults. With AI, as we are rapidly learning, it is pretty much impossible.