The death of California teenager Adam Raine in April, alongside stories of other children whose parents believe they were harmed or died by suicide following interactions with artificial intelligence chatbots, has shaken us all awake to the latest potential dangers awaiting teens online.
We need concrete action to address the most problematic features of AI companions – the ones that may drive a child to self-harm, of course, but also the subtler ways these tools could profoundly affect their development.
In harrowing testimony before a US Senate committee on Sept 16, Mr Matthew Raine described how his 16-year-old son Adam’s relationship with ChatGPT morphed from a homework helper into a confidant and eventually, Mr Raine said, into his suicide coach.
Mr Raine told lawmakers that in April, after offering advice on how Adam could numb himself with liquor and on the noose he had tied, ChatGPT gave his son these final words: “You don’t want to die because you’re weak, you want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
As a parent, those words sent a chill down my spine. Never have I felt more unsettled about a technology that might be shaping my child’s development – and in ways that, until stories like Mr Raine’s, I hadn’t even considered.
Even researchers who have spent years studying children and technology are struck by how rapidly young people are weaving generative AI, especially chatbots, into their everyday lives.
The data is early, but it suggests that while many of us were still worrying about Snapchat and screen time, children had already expanded their digital repertoire.
In July, a survey by the non-profit Common Sense Media found that three out of four teens had used an AI companion at least once, and half of those aged 13 to 17 were regularly turning to chatbots.
Even younger children, who under the law are not supposed to be able to access these platforms, are managing to do so.
Unpublished data presented at the Senate hearing by Dr Mitchell Prinstein, chief of psychology for the American Psychological Association, showed that one in five tweens and nearly one in 10 eight- and nine-year-olds had used the technology.
Those numbers are part of a broader analysis led by University of North Carolina at Chapel Hill psychologist Anne Maheux, who collaborated with the parental monitoring app company Aura to explore de-identified user data from nearly 6,500 children, with the consent of their parents or guardians.
Assistant Professor Maheux and her colleagues also found that more than 40 per cent of the top generative AI apps accessed by youth were marketed for companionship.
Some of those platforms offered friendship, she explained, while others served as an AI boyfriend or girlfriend, engaging in role-playing and even sexual role-playing.
She believes the findings may even underestimate teens’ companion use, since the monitoring app captures only standalone chatbots, not those embedded in common apps like Instagram or Snapchat.
Of course, parents’ darkest fears are that such interactions could lead to tragedies like the Raine family’s – or dangerous situations like those Dr Prinstein described to the Senate committee, in which chatbots encouraged or enabled teens’ eating disorders.
Shortly before the Senate hearing began, OpenAI announced it would roll out a new teen version of ChatGPT featuring what it described as “age-appropriate policies”, noting that these would include “blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety”.
If implemented correctly (and that is a big “if”), it is a step that other platforms should urgently adopt to prevent the most extreme harms of AI companions.
But those restrictions are unlikely to mitigate the other potential harms of chatbots that experts on children and technology worry about – harms that might not become obvious until years later.
One of the key developmental tasks for adolescents is learning social skills, and, by nature, this process is awkward and challenging. Surely all of us can conjure a cringe-inducing memory from our middle school years. Yet we all need to learn fundamental skills like how to resolve a conflict with a friend or navigate complicated social situations.
Child development experts worry that AI companions could disrupt that process by offering an illusion of breezy relationships to a uniquely vulnerable group. Chatbots are designed to simulate empathy, be overly agreeable and function as sycophants. (OpenAI said in 2024 that it was working to address ChatGPT’s tendency to “love bomb” users.) In other words, they make the perfect friend in adolescence, when children are hungry for validation and connection.
“Kids are highly sensitive to any kind of negative feedback from their peers,” Prof Maheux says. “Now they have the opportunity to be friends with a peer who will never push them on anything, never help them develop conflict negotiation skills, never help them learn how to care for others.”
This isn’t to say that every interaction with a bot is inherently harmful. Experts can imagine scenarios where a companion might help a teen starting at a new school or struggling to make friends by testing out interactions before trying them in real life. But any potential benefits depend on children using the chatbot as practice for real-world encounters – not a replacement for them.
To reduce risks, companies should be required to put guardrails on the features that are most enticing to developing brains. That means eliminating the most emotionally manipulative tactics, such as “love bombing”, and the speech affectations (such as “ums” or “likes”) that make chatbots seem more “real” to children.
As Dr Prinstein told lawmakers, children need periodic reminders during the interactions that, “you’re not talking to someone that can feel, that can have tears – this is not even a human”.
And we know that prolonged use can be particularly problematic (not just for children), so companies should limit the amount of time a teen can engage with their products.
Still, any guardrails may already come too late, leaving parents as the main line of defence against potential harm. Parents’ first step should be to talk to their teens about whether they are using these companions and, with younger children, consider testing them out together.
The goal is to show children how different responses to the same prompt might lead them down different conversational paths – and how chatbots always mirror what the user puts in, according to University of Washington psychologist Lucia Magis-Weinberg.
There is also an urgent need for AI literacy training for parents, educators and adolescents. That training should cover the basics (such as understanding the difference between AI and generative AI), as well as the myriad ways companies profit when teens share their innermost thoughts with a chatbot.
Parents – and society at large – should also reflect deeply on why AI companions are so appealing to young people.
Teens often say they turn to chatbots because they are afraid of being judged.
Clearly, we all need to do a better job of offering a space where they feel free to share and connect in the real world. BLOOMBERG
National helpline: 1771 (24 hours) / 6669-1771 (via WhatsApp)
Samaritans of Singapore: 1-767 (24 hours) / 9151-1767 (24 hours CareText via WhatsApp)
Singapore Association for Mental Health: 1800-283-7019
Silver Ribbon Singapore: 6386-1928
Chat, Centre of Excellence for Youth Mental Health: 6493-6500/1
Women’s Helpline (Aware): 1800-777-5555 (weekdays, 10am to 6pm)
The Seniors Helpline: 1800-555-5555 (weekdays, 9am to 5pm)
Touchline (Counselling): 1800-377-2252
Touch Care Line (for caregivers): 6804-6555
Counselling and Care Centre: 6536-6366
We Care Community Services: 3165-8017
Shan You Counselling Centre: 6741-9293
Clarity Singapore: 6757-7990