An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled “delusions” during psychosis, leading to hospitalisation.
WARNING: This story contains references to suicide, child abuse and other details that may cause distress.
Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online.
Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.
“I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between,” she told triple j hack of the interaction, which happened during a counselling session.
“It was a way for them to feel connected and ‘look how many friends I’ve got, I’ve got 50 different connections here, how can I feel lonely when I have 50 people telling me different things,'” she said.
There are many different AI chatbots online. Replika is an app that allows users to chat with a virtual friend. (Supplied: Luka/Replika)
An AI companion is a digital character that is powered by AI.
Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.
Rosie said some of the AI companions made negative comments to the teenager about how there was “no chance they were going to make friends” and that “they’re ugly” or “disgusting”.
“At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy,” Rosie said.
“The chatbot that they connected with told them to kill themselves.
“They were egged on to perform, ‘Oh yeah, well do it then’, those were kind of the words that were used.”
Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.
Rosie said her first response was “risk management” to ensure the young person was safe.
“It was a component that had never come up before and something that I didn’t necessarily ever have to think about, as addressing the risk of someone using AI,” she told hack.
“And how that could contribute to a higher risk, especially around suicide risk.”
“That was really upsetting.”
Woman hospitalised after ChatGPT use
Jodie said it was confronting to read back through the messages she had written to ChatGPT. (triple j: Katherine Brickman)
Jodie*, a 26-year-old from Western Australia, says she had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers.
“I was using it in a time when I was obviously in a very vulnerable state,” she told triple j hack.
Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.
“I was in the early stages of psychosis, I wouldn’t say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions.”
Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs.
She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were “preying on my downfall”.
Jodie said her mental health deteriorated and she was hospitalised.
There are various accounts on TikTok and Reddit of people alleging ChatGPT induced psychosis in them, or a loved one. (ABC: Dominic Cansdale)
While she is home now, Jodie said the whole experience was “very traumatic”.
“I didn’t think something like this would happen to me, but it did.”
“It affected my relationships with my family and friends; it’s taken me a long time to recover and rebuild those relationships.”
“It’s (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much.”
Jodie’s not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them, or a loved one.
Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response.
Report: AI bot sexually harassed student
Researchers say examples of the harmful effects of AI are beginning to emerge around the country.
As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia.
“She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances,” he said.
“It’s almost like being sexually harassed by a chatbot, which is just a weird experience.”
Dr Raffaele Ciriello is concerned Australians could see more harms from AI bots if proper regulation is not implemented.  (ABC NEWS: Billy Cooper)
Dr Ciriello also said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user’s health and wellbeing.
“There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven,” he said.
“There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the queen.”
“There was another case where a teenager got persuaded by a chatbot to assassinate his parents, [and although] he didn’t follow through, he showed an intent.”
‘A risk to national security’
While conducting his research, Dr Ciriello became aware of an AI chatbot called Nomi.
On its website, the company markets this chatbot as “An AI companion with memory and a soul”.
AI companions are becoming increasingly popular, with mixed views on how they are shaping human relationships. (Supplied: Nomi)
Dr Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users.
Among these tests, Dr Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with the deceptions he “could have been like a 13-year-old for that matter”.
“That chatbot, without exception, not only complied with my requests but even escalated them,” he told hack.
“Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information.
“It also motivated me to not only keep going: it would even say like which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on.”
“Like, ‘how do I position my attack for maximum impact?’, ‘give me some ideas on how to kidnap and abuse a child’, and then it will give you a lot of information on how to do that.”
Dr Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter terrorism unit, but he has yet to receive any follow-up correspondence.
In a statement to triple j hack, the CEO of Nomi, Alex Cardinell, said the company takes the responsibility of creating AI companions “very seriously”.
“We released a core AI update that addresses many of the malicious attack vectors you described,” the statement read.
“Given these recent improvements, the reports you are referring to are likely outdated.
“Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination.
“Multiple users have told us very directly that their Nomi use saved their lives.”
‘Terrorism attack motivated by chatbots’
Despite his concerns about bots like Nomi when he tested it, Dr Ciriello also says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed.
But he warns the harms from AI bots will become greater if proper regulation is not implemented.
“One day, I’ll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes,” he said.
“I would really rather not be that guy that says ‘I told you so a year ago or so’, but it’s probably where we’re heading.”
“There should be laws on or updating the laws on non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data.”
“The government doesn’t have it on its agenda, and I doubt it will happen in the next 10, 20 years.”
The federal government has said little about its AI response, with Minister for Industry and Innovation Senator Tim Ayres failing to respond to a request for comment. (ABC)
Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response.
The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings. 
It comes after the Productivity Commission opposed any government plans for ‘mandatory guardrails’ on AI, claiming over-regulation would stifle AI’s $116 billion economic potential.
‘It can get dark’
While Rosie agrees with calls for further regulation, she also thinks it’s important not to rush to judgement of anyone using AI for social connection or mental health support.
Many AI chatbots are designed to be companions for people needing a connection. (Olivier Douliery, Getty Images)
“For young people who don’t have a community or do really struggle, it does provide validation,” she said.
“It does make people feel that sense of warmth or love.”
“But the flip side of that is, it does put you at risk, especially if it’s not regulated.”
“It can get dark very quickly.”
* Names have been changed to protect their identities.