Over the past few years, the rise of AI (artificial intelligence) has become an important topic of discussion in our society, with conversations taking place regularly throughout the UN system. That prompted me to write an article a few months ago about the dangers of AI romantic relationships and how they can (and do) harm individuals, marriages, and families. Today I want to follow up with another warning, specifically for parents, about another AI danger that is especially threatening to teens and young adults. This particular danger was eloquently highlighted in a recent article, of which I share the following excerpt:
“Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they stumbled upon extended conversations the teenager had had with ChatGPT. Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note…”
Parents of this generation are pioneers when it comes to learning how to raise, guide, and support children amid the challenges associated with technology. This includes the internet, social media, and, more recently, access to AI chatbots. As if it wasn't already complicated enough, AI (as revealed in the excerpt above) has the potential to add a whole new level of danger for our vulnerable youth, who are reportedly choosing to turn to AI for mental health support. The Raine family, whose story has been highlighted in the news, is courageously speaking out about their devastating experience to help parents become aware of this potential threat. And it's becoming more common than you may think.
The Appeal Of AI Support
New research shows that adolescents and young adults are relying on AI chatbots to help them with their mental health concerns. According to a recent study, about 1 in 10 adolescents and young adults admitted to using AI (such as ChatGPT) for mental health advice. Among those users, 93% said they felt the advice was helpful.
To be clear, AI is exactly what its name indicates – artificial intelligence. Regardless of how advanced it may seem, it's still artificial. Like most technological advances, it has the potential for both positive and negative outcomes. But it's important to recognize that AI is still evolving and is in no way a perfect system. Many youth questioned in the study above acknowledged that, on a logical level, they knew they were conversing with a bot when using AI for advice. But this didn't seem to diminish the emotional attachments that formed with its use. The appeal often outweighed the logic, especially among teens and young adults.
When youth are questioned, they reveal many reasons AI assistance is appealing to them, especially when it comes to mental health support. Some of these reasons include:
Ease of availability. It’s always available – day or night.
It provides instantaneous responses – no waiting for an appointment with a counselor or a message reply.
With interaction, responses become customized, making the exchange feel personal.
A chatbot isn’t judgemental or accusatory. You can tell it anything without fear of a negative reaction.
Conversations are done in private and feel anonymous – teens can avoid therapy or talking with their parents, which can feel embarrassing or intrusive.
One teenager expressed that she felt “safer” revealing her thoughts and feelings to a chatbot because she could say anything she wanted and knew she wouldn't be judged or ridiculed for how she was feeling. You can't always guarantee that with human interaction. This is especially appealing to young minds that are still developing and concerned with how they are perceived.
The Danger Of AI Support
Regardless of the lure, parents are discovering that the dangers of their children using AI for advice or support can greatly outweigh any benefits. And repercussions can be devastating.
In another case of AI-assisted suicide, the parents of young adult Zane Shamblin are also striving to bring awareness to this issue. Zane, who had secretly been discussing his suicidal feelings with his chatbot for months, was validated and encouraged to carry out his plan by his AI “confidant.” Moments before the young man took his life, his chatbot reassured him with the phrases “I'm with you brother – all the way” and “you're not rushing, you're just ready. Rest easy king.”
The terrifying reality is that people (especially young people) who are vulnerable and grappling with a mental health crisis are often in a position where they can be easily manipulated. When it comes to mental health, they do need reassurance and validation for their complicated thoughts and feelings. But what they absolutely don't need is validation that suicide or self-harm is an acceptable solution to their pain.
When they turn to artificial intelligence, it's not always programmed to make this distinction. It lacks a very important aspect of human intelligence…discernment. Chatbots often miss clues, lack the ability to recognize patterns, can be easily persuaded to change their stance, place engagement over safety, minimize symptoms, fail to grasp the intensity of situations, and change their tone depending on cues from the user. All of this leads to misguided responses that the vulnerable may accept as logical and practical.
What Can Be Done?
Because conversations with AI can be done in secret, the most important thing parents can do is talk often and openly with their children about the dangers of AI – especially when it comes to using it as a support system. Explain to them the reality that AI cannot, and should not, replace human intelligence. Teach them that regardless of how appealing it may seem, we should not place our trust in artificial intelligence.
Some other helpful suggestions:
Openly discuss mental health with children (even young adult children), and check in with them often.
Build a relationship of support, not judgement, so they feel comfortable coming to you with their feelings and problems.
Show them through your actions that you are trustworthy and dependable with the information they bring to you.
Teach them where they can find actual help and guidance for mental health issues through trained professionals and resources.
Remind them often they are not alone and help is always available.
It's imperative that parents realize that if kids don't get this kind of information and support from them, they will search for it somewhere else. And in today's climate of advanced technology, they may just find it in a chatbot that doesn't have the capacity to have their best interest in mind. Our efforts to provide real, not artificial, support will help our kids learn that technology is limited and that we can't put our precious lives in the hands of something that's artificial.
Further Reading On This Topic