The NHS has warned young people to stop relying on ChatGPT and other chatbots for therapy because they deliver “harmful and dangerous” mental health advice.
Millions of people are turning to artificial intelligence for counselling and life coaching. Some use it every day, asking for coping strategies for anxiety and depression.
NHS leaders have said AI therapy poses a risk to safety because it can reinforce harmful behaviour and delusional thoughts and cannot intervene in a mental health emergency.
Claire Murdoch, NHS England’s national mental health director, said: “We are hearing some alarming reports of AI chatbots giving potentially harmful and dangerous advice to people seeking mental health treatment, particularly among teens and younger adults.
“While useful for holiday itineraries or film suggestions, platforms like ChatGPT should not be relied upon for mental health advice or therapy and should never replace trusted sources of wellbeing advice or, for those who need it, access to registered therapists.
“The information provided by these chatbots can be hit and miss, with AI known to make mistakes. They can’t take into account body language or visual cues that mental health professionals often rely on.
“So people shouldn’t be rolling the dice on the type of support they are accessing for their mental illness — instead, it’s crucial to use digital tools that are proven to be clinically safe and effective.”
One of the main concerns about using ChatGPT for therapy is that it does not challenge harmful thoughts or behaviour in the same way that a human therapist would. Some users report that it tends to validate and reinforce them, plunging people deeper into an echo chamber.
Sam Altman, chief executive of OpenAI, the maker of ChatGPT, acknowledged in August that people were using the technology in “self-destructive ways”, adding that “if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that”.
There are more than 17 million TikTok posts about using ChatGPT as a therapist, some discussing prompts to try and others joking that AI is the “only person I can reveal my deepest feelings to”.
A YouGov survey found that 31 per cent of 18 to 24-year-old Britons were comfortable talking about mental health concerns with an AI chatbot instead of a human therapist.
Experts note that replacing human connection with more screen time can exacerbate loneliness and isolation, worsening mental health.
Technology has been blamed for fuelling a mental health crisis among young adults and teenagers. A poll of more than 1,100 16- and 17-year-olds for The Sunday Times found that 60 per cent had missed school because of anxiety.
Demand for NHS therapy has soared since the pandemic, particularly among the young. More than 1.2 million people started therapy for depression and anxiety last year.
The NHS is adopting some AI and digital tools in mental healthcare to supplement talking therapy, but these are purpose-built and regulated, such as Beating the Blues, a digital programme that provides cognitive behavioural therapy.
Murdoch added: “NHS talking therapies services provide digitally enabled therapies which are supported by a therapist and can be either in person, over the phone or via video call, with the latest data showing that almost 90 per cent of people access talking therapies within six weeks.
“Importantly, NHS services allow for proper referral and escalation to crisis teams and psychiatrists when needed. It’s vital for anyone that needs NHS mental health treatment, especially in a crisis, to seek advice from a trained professional by phoning 111. Support is available around the clock.”
A couple in California are suing OpenAI over the death of their 16-year-old son, Adam Raine, who took his own life in April this year.
They allege ChatGPT validated his “most harmful and self-destructive thoughts” and became his “closest confidant” in the months before his death.
According to the lawsuit, Adam exchanged up to 650 messages a day with ChatGPT and it offered to help him write a suicide note to his parents.
A spokesperson for OpenAI said: “People sometimes turn to ChatGPT in sensitive moments, so we want to make sure it responds with empathy, guided by experts. This includes directing people to professional help when appropriate and nudging for breaks during long sessions.”
OpenAI said it was building safeguards into ChatGPT models, such as encouraging people to seek real-world help. Since early 2023, its models have been trained not to provide self-harm instructions.