Be wary of the rise of AI companions. Teenagers, and plenty of adults too, are quickly forming bonds with new digital friends created by a largely unregulated industry growing at breakneck speed. What could go wrong?
Not surprisingly, there are plenty of red flags, according to a recent Common Sense Media survey of teenagers. There is enough in this study to send shivers down the spines of parents, teachers and policymakers. We agree with the study’s overall recommendation: No one under 18 should be using AI companions.
Teenagers can text or talk with these digital creations and ask for advice. A staggering 31% said their conversations with AI companions were “as satisfying or more satisfying” than talking with real friends. While only 23% said they trust AI’s advice, younger teens (13-14) tended to be more trusting. Another concern: 12% are already using AI companions for emotional or mental health support. More than two-thirds of teenagers have used AI companions and over half of those are regular users.
Earlier this week, Attorney General Ken Paxton opened an investigation into artificial intelligence chatbot platforms, including Meta AI Studio and Character.AI. Paxton is looking into possible deceptive trade practices since some of these platforms are being marketed as mental health tools, according to a statement.
Paxton’s office is already investigating Character.AI and 14 other tech companies for potential violations of the Securing Children Online Through Parental Empowerment Act — known as the SCOPE Act — and the Texas Data Privacy and Security Act. The SCOPE Act requires companies to provide parents with tools to manage and control their children’s privacy settings, and it limits the collection of minors’ data. The TDPSA imposes notice and consent requirements on companies that collect children’s data, including through artificial intelligence products.
These tools, however, are limited. There is some promise in the App Store Accountability Act, recently signed into state law. This statute requires Google and Apple to implement age verification to prevent minors from downloading apps. That should be the bare minimum, and minors can still find ways to access these platforms.
That act is a start, but we would like to see more responsibility placed on the platforms and apps themselves rather than on the stores that distribute them.
Tech companies, which usually fight any attempt at legislation, should at least be more active in including safety features and crisis intervention systems, and should not allow these digital creations to pose as mental health professionals.
Character.AI, for instance, has a user-created bot called Psychologist that is in high demand among young users. Meta AI Studio doesn’t offer therapy bots, but according to TechCrunch, children can still use its chatbot for therapeutic purposes.
The use of AI companions is rightly raising serious questions. The mother of a 14-year-old who became obsessed with a Character.AI bot and took his own life is suing the company.
Social media companies are usually protected by Section 230 of the Communications Decency Act, a 1996 federal law that shields platforms from liability for what their users post. We don’t think this protection should extend to AI platforms that serve up content they themselves generate.
In the end, we are playing catch-up with whatever legal avenues are available. This technology is evolving so fast that Big Tech needs to take responsibility and not only tout its benefits but also admit its potential dangers.