A mother has warned her son's delusions are being made worse by ChatGPT


A worried mother has told LBC how ChatGPT is fuelling her son’s manic episodes – as experts warn of increasing cases of ‘AI psychosis’.

The mother-of-three, who has chosen to remain anonymous, said her son uses the chatbot daily and sees it as a reliable companion which can do no wrong.

Her son, who we’ve chosen not to identify, suffers from manic psychotic episodes.

He often turns to ChatGPT when these peak, asking it for advice about his grandiose ideas, she says.

“My son sees ChatGPT and its more fanciful claims reinforce his florid delusions,” the mother told LBC.

“I tried to hint to him not to put complete trust in it, but he is more willing to believe it than anything I have to say,” she added.

This has left her feeling “anxious and defeated,” she says, voicing concerns that due to his reliance on the technology, everything could come “crashing down”.

It comes after Microsoft’s head of AI Mustafa Suleyman warned of increasing reports of AI psychosis – a non-clinical term referring to incidents where people grow reliant on chatbots, leading them to become convinced that something imaginary has turned into something real.



Speaking exclusively to LBC, Professor Andrew Mcstay, author of Emotional AI: The Rise of Empathic Media, said he receives daily emails from people potentially displaying signs of AI psychosis.

He said: “My email is increasingly populated by people reaching out to me saying that they’ve found and discovered some new form of intelligence, sending me transcripts, sending me voice recordings.

“I get at least two or three emails a day from people exploring this AI system thinking and believing that they’ve found something that nobody else has seen or understood before.

“To be honest, at first I thought, you know, don’t click on this. This is a scam of some sort. I really didn’t take it seriously.”

But then, Prof Mcstay realised some of the senders were displaying some of the tell-tale signs, including “addiction”, “compulsive usage” and “reality detachment”.

He warned that many of these people are seemingly becoming “untethered from truth and everyday realities and shared experience because of over-usage of these chatbots”.

He added that the issue will not go away any time soon.

“Although we’re in a big flurry about chatbots at the moment, these things are going to stick around.

“We’re only going to have more of them, they’re only going to become more cemented in everyday life, and not just of the companion sort, but as agents that we work with, you know, as a new type of social actor.”


In the UK, there are currently no explicit AI laws – unlike those seen across Europe and other parts of the world.

Meanwhile, the Online Safety Act has no provision for these types of technologies and remains focused on social media in a bid to protect children’s safety.

Prof Mcstay said: “It is a piece of legislation that was really kind of conceived and written before all of this.

“I’m not the only one who thinks this. There’s a number of civil society organisations and indeed regulators who kind of spotted this, that there does seem to be a big gap here.

“The Online Safety Act, at least in relation to chatbots, is not fit for purpose. It was never designed with that purpose.”


A Department for Science, Innovation and Technology (DSIT) spokesperson said: “The Online Safety Act marks the most significant step forward in online safety since the internet was created. 

“Providers of online services, including AI chatbots which fall under the Act, must protect people from illegal content and children from harmful content.

“This Act has laid the foundations for a safer online world, ensuring online services take responsibility for the safety of people who use them.”

An OpenAI spokesperson said: “People sometimes turn to ChatGPT in sensitive moments, so we want to make sure it responds appropriately, guided by experts. 

“This includes directing people to professional help when appropriate, strengthening our safeguards in how our models respond to sensitive requests, and nudging for breaks during long sessions. Soon, parents will have new tools to link their account with their teen’s and set guardrails.

“We want ChatGPT to be as helpful as possible, so we will continue to strengthen how it responds, with input from mental health experts from around the world.”