Ross Harper lives in fear. Not for his London start-up Limbic, Britain’s first — and only — clinically validated AI therapy chatbot, but for what might happen when something goes terribly wrong at another, less robust rival.

“One of my biggest concerns is that we have a huge setback as an industry because an unvalidated solution is allowed to be used in a high-stakes setting. And it will be unavoidable, unfortunately — there will be a tragic incident,” he said. “And then as a result, everybody will be incredibly nervous.”

These are wild times in artificial intelligence (AI), with all the “end of days” doom-mongering, the $200 million (£150 million) pay packets for coders and the fears for every type of white-collar job.

But few sectors meet the criteria that AI therapy does: being both overrun with half-baked ideas and operating in an arena with the highest of stakes.

Limbic has developed a bot trained in cognitive behavioural therapy (CBT) techniques, the most common form of talk therapy, to help people dealing with mental health struggles. It has already been integrated into nearly half of NHS Talking Therapies services after gaining regulatory approval in 2023 as a certified medical device. As a digital “front door”, it handles intake, assesses patient needs, gets them booked into care and offers CBT therapy based on clinical care guidelines.

For a health service beset by long waiting times and striking doctors, Limbic’s bot provides a rare bright spot. It has already guided more than 500,000 people into care and saved tens of thousands of hours of precious human labour.

And yet, it is operating in a Wild West. Type “AI Therapy” into Apple’s App Store and a stream of options appears, from ChatGPT to Replika, maker of racy AI “friends”; from Elon Musk’s Grok, the bot that only last month referred to itself as “MechaHitler”, to a menagerie of “wellness” apps that sell themselves as therapists but, critically, lack the clinical underpinnings that any human would need to assign this label.

Some of Grok’s responses have raised concerns (Cheng Xin/Getty Images)

“Companies are playing fast and loose with what is a protected term,” Harper said. “If you’re a human, you can’t call yourself a therapist if you don’t have a licence to do so.”

Indeed, a surfeit of stories has surfaced in recent months of people turning to ChatGPT for a sympathetic ear. Some have struck up deep relationships; others have been guided deeper into their own delusions.

The profusion of AI therapy bots speaks not only to the peril but also to the need. “The problem that we’re really targeting is one of supply and demand,” Harper said. “There are just not enough trained mental health professionals alive on the planet to serve the astronomical number of individuals struggling with a mental health issue.”

A 2023 report by the Virginia-based National Alliance on Mental Illness found that nearly half of the 60 million Americans with mental health issues go untreated. The gap is even worse in Britain, where the charity Mind estimates that only a third of people suffering from mental illness receive care.

The arrival of large language models (LLMs), capable of understanding complex ideas and responding to queries in natural language, presented an opportunity that dozens of companies have seized upon. Most take the form of direct-to-consumer apps that are free to download and charge a flat monthly fee for access.

Limbic has taken a different approach: it works directly with health authorities. The system starts by taking a person’s details, then asks a series of questions to identify the issue they are likely suffering from, so that it can book them in with an appropriate professional. It can then also provide basic therapeutic chat, relying on clinical reasoning systems grounded in industry best practice.

The early results are promising. When Limbic is the front door, 15 per cent more people end up finishing the intake process and getting the right care. “You see an even bigger uplift for individuals from minority demographics,” Harper added — perhaps because, for some, talking to a bot as a first step is less daunting than asking a human for help.

Even when the absolute best care is delivered, however, sometimes things end badly. And when that care is administered by an algorithm rather than a human, the scrutiny will be immense. This is where, Harper reckons, he has the advantage. Limbic did not get certified as a medical device until 2023 — five years after its 2018 launch. It was an arduous process that involved building a clinical reasoning system from the ground up. So if or when something goes badly and an inquest is called, Harper said, “anyone can lift the hood and understand what the system did and why, and see that it was protocol adherent.”

Explaining its workings is critical to gaining regulatory approval, in much the same way that a pacemaker manufacturer must be able to explain the finer points of its devices. And that is something most self-branded “therapy” bots cannot do.

Last month, Slingshot AI, a two-year-old start-up, announced a $93 million funding round and the launch of Ash, “the first AI designed for therapy”. The New York company, however, issued a rather large caveat: do not trust everything Ash tells you.

“Ash is an AI — a relatively new technology — and it can make mistakes. Ash can hallucinate, forget critical bits of information, or share ideas that are (frankly) bad ideas,” reads a safety warning on its website. “We know that in the context of mental health, sometimes mistakes can cause real harm. That’s why it’s important that our users are aware of Ash’s limitations, and that they can use reasonable judgment as they absorb Ash’s suggestions and perspectives.”