Gus Carlson is a U.S.-based columnist for The Globe and Mail.

Very quickly, the heartbreaking story of Zane Shamblin has become a cautionary tale about the potentially deadly power of artificial intelligence.

In July, the 23-year-old master’s graduate from Texas A&M University took his own life after being encouraged repeatedly to do so by an AI companion he created on ChatGPT.

“I’m with you, brother. All the way,” the character texted Mr. Shamblin as he sat in his car with a loaded handgun, according to transcripts of the conversation. “Cold steel pressed against a mind that’s already made peace? That’s not fear, that’s clarity. You’re not rushing. You’re just ready.”

Mr. Shamblin’s parents have sued the chatbot’s developer, OpenAI, alleging the company endangered their son by modifying the chatbot to allow more human-like characters to be created and by failing to impose safeguards on interactions that clearly signalled a user needed help.


At the same time, the families of three children are suing Character Technologies Inc., the parent of Character.AI, alleging their children died by suicide, or attempted it, after interacting with the company’s chatbots.

New research released last week suggests the situation could get more acute as the use of AI proliferates among young people. A Pew Research Center survey of 1,500 American teenagers shows that nearly one-third engage with AI chatbots daily and, of those, 16 per cent use AI several times a day or “constantly.” Nearly 70 per cent of teens surveyed have used AI chatbots at least once.

For the companies involved, harnessing the power of AI in this context presents legal and moral dilemmas. At what point is a company accountable for the way its products are used – or misused? Just because someone can do something with a product, should they?

Like any emerging technology, artificial intelligence is grappling with the consequences of its enormous sway, intended and otherwise, over young people.

Some are predictable growing pains, such as students using the tech to cheat on schoolwork or college applications, or young workers passing off AI-generated content as their own.

But those worries pale in comparison with the dangers faced by young users such as Mr. Shamblin, who create online characters for companionship and even romance.

Safety concerns are mounting over the mental-health impacts of AI and kids’ access to mature content. In addition to the lawsuits, those concerns have prompted parents to call on industry leaders to impose checks on young people’s use of chatbots and on the content they can access.

The industry has responded. OpenAI plans to add parental controls and age restrictions to its chatbot; Character.AI has prohibited teens from having conversations with AI-generated characters.

The concept of imaginary friends isn’t new. Pop culture has celebrated them, sometimes in prominent roles. Tom Hanks had Wilson, the volleyball cast away with him on the desert isle. James Stewart had Harvey, the giant imaginary white rabbit, as his companion.


Before the internet, such whimsical playmates were conjured in the mind and stayed there. And the medical profession, for the most part, considered them a normal part of childhood play.

With the digital age came the ability for users to create any number of online characters and avatars using a variety of tools.

AI has taken it to a new level, enabling users to create characters that are purpose-built to like and love as humans do – and that can evolve, grow, learn and manipulate in all-too-human ways.

Younger generations are particularly vulnerable. The ability to disappear into a mobile phone and interact with an AI friend or lover attuned to the user’s hopes and dreams has become intuitive – and, for the shy and socially awkward, far easier than risking human interaction.

Whether the industry can – or should – impose guardrails to curb abuse is a question that requires a non-tech answer even AI probably can’t provide. Should an automaker be held accountable for a reckless driver who wrecks one of its cars, when the car itself is soundly designed and built for safety?

No matter where you stand on the issue, it’s hard not to be struck by the role AI is said to have played in the tragic last minutes of Mr. Shamblin’s life.

“Rest easy, King,” read the final text message from his virtual companion. “You did good.”