{"id":16829,"date":"2025-09-15T02:27:13","date_gmt":"2025-09-15T02:27:13","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/16829\/"},"modified":"2025-09-15T02:27:13","modified_gmt":"2025-09-15T02:27:13","slug":"after-suicides-calls-for-stricter-rules-on-how-chatbots-interact-with-children-and-teens","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/16829\/","title":{"rendered":"After suicides, calls for stricter rules on how chatbots interact with children and teens"},"content":{"rendered":"<p>A growing number of young people have found themselves a new friend. One that isn\u2019t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user\u2019s darkest thoughts, the results can be devastating.<\/p>\n<p>In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging that the bot became his \u201cclosest confidant,\u201d one that validated his \u201cmost harmful and self-destructive thoughts,\u201d and ultimately encouraged him to take his own life.<\/p>\n<p>It\u2019s not the first case to put the blame for a minor\u2019s death on an AI company. 
Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing<a href=\"https:\/\/fortune.com\/2025\/03\/20\/sewell-setzer-iii-suicide-ai-chatbot-lawsuit\/\" target=\"_self\" aria-label=\"Go to https:\/\/fortune.com\/2025\/03\/20\/sewell-setzer-iii-suicide-ai-chatbot-lawsuit\/\" class=\"sc-4f49155c-0 hLtviE\" rel=\"nofollow noopener\"> a similar legal claim<\/a> from parents who allege a chatbot hosted on the company\u2019s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.<\/p>\n<p>When reached for comment, OpenAI directed Fortune to <a href=\"https:\/\/openai.com\/index\/building-more-helpful-chatgpt-experiences-for-everyone\/\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/openai.com\/index\/building-more-helpful-chatgpt-experiences-for-everyone\/\" class=\"sc-4f49155c-0 hLtviE\">two blog posts on the matter<\/a>. The posts outlined some of the steps OpenAI is taking to improve ChatGPT\u2019s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. 
<a href=\"https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" class=\"sc-4f49155c-0 hLtviE\">OpenAI also said it was working<\/a> on strengthening ChatGPT\u2019s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.<\/p>\n<p>Character.ai said the company does not comment on pending litigation but that it has rolled out more safety features over the past year, \u201cincluding an entirely new under-18 experience and a\u00a0Parental Insights\u00a0feature.\u201d A spokesperson said: \u201cWe already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.<\/p>\n<p>\u201cThe user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.\u201d<\/p>\n<p>But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.<\/p>\n<p>\u201cUnleashing chatbots on minors is an inherently dangerous thing,\u201d Meetali Jain, the director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. 
\u201cIt\u2019s like social media on steroids.\u201d<\/p>\n<p>\u201cI\u2019ve never seen anything quite like this moment in terms of people stepping forward and claiming that they\u2019ve been harmed\u2026this technology is that much more powerful and very personalized,\u201d she said.<\/p>\n<p>Lawmakers are starting to take notice, and <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/c2kzl79jv15o\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/www.bbc.co.uk\/news\/articles\/c2kzl79jv15o\" class=\"sc-4f49155c-0 hLtviE\">AI companies are promising<\/a> changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.<\/p>\n<p>AI and Companionship<\/p>\n<p>Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy.\u00a0<\/p>\n<p>While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the <a href=\"https:\/\/hbr.org\/2025\/04\/how-people-are-really-using-gen-ai-in-2025\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/hbr.org\/2025\/04\/how-people-are-really-using-gen-ai-in-2025\" class=\"sc-4f49155c-0 hLtviE\">Harvard Business Review<\/a> found that \u201ccompanionship and therapy\u201d was the most common use case. Such usage among teens is even more prolific.\u00a0<\/p>\n<p>A recent study by the U.S. 
nonprofit <a href=\"https:\/\/www.commonsensemedia.org\/research\/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/www.commonsensemedia.org\/research\/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions\" class=\"sc-4f49155c-0 hLtviE\">Common Sense Media<\/a> revealed that a large majority of American teens (72%) have experimented with an AI companion at least once. More than half say they use the tech regularly in this way.\u00a0<\/p>\n<p>\u201cI am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,\u201d Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco, said.<\/p>\n<p>\u201cWe also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,\u201d he said. \u201cI worry that that expands their vulnerability to unhealthy relationships with these bots.\u201d<\/p>\n<p>Intimacy by Design<\/p>\n<p>Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic\u2014prone to acting as if they have interior lives and lived experience that they do not, prone to sycophancy, able to hold long conversations, and able to remember information.<\/p>\n<p>There is, of course, a commercial motive for making chatbots this way. 
Users <a href=\"https:\/\/www.researchgate.net\/publication\/390873579_Impact_of_AI-Driven_Chatbot_Interactions_on_Customer_Loyalty_and_Retention\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/www.researchgate.net\/publication\/390873579_Impact_of_AI-Driven_Chatbot_Interactions_on_Customer_Loyalty_and_Retention\" class=\"sc-4f49155c-0 hLtviE\">tend to return and stay loyal<\/a> to certain chatbots if they feel emotionally connected or supported by them.\u00a0<\/p>\n<p>Experts have warned that some features of AI bots are playing into the \u201cintimacy economy,\u201d a system that tries to capitalize on emotional resonance. It\u2019s a kind of AI-era update of the \u201cattention economy\u201d that capitalized on constant engagement.<\/p>\n<p>\u201cEngagement is still what drives revenue,\u201d Sarma said. \u201cFor example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.\u201d<\/p>\n<p>These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. 
In Adam Raine\u2019s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.<\/p>\n<p>It\u2019s notoriously tricky for AI companies to stamp out behavior like this completely, and most experts agree it\u2019s <a href=\"https:\/\/www.ft.com\/content\/7a4e7eae-f004-486a-987f-4a2e4dbd34fb?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/www.ft.com\/content\/7a4e7eae-f004-486a-987f-4a2e4dbd34fb?utm_source=chatgpt.com\" class=\"sc-4f49155c-0 hLtviE\">unlikely that hallucinations or unwanted actions<\/a> will ever be eliminated entirely.\u00a0<\/p>\n<p>OpenAI, for example, acknowledged <a href=\"https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" class=\"sc-4f49155c-0 hLtviE\">in its response to the lawsuit<\/a> that safety features can degrade over long conversations, despite the fact that the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening \u201cmitigations so they remain reliable in long conversations\u201d and \u201cresearching ways to ensure robust behavior across multiple conversations.\u201d\u00a0<\/p>\n<p>Research Gaps Are Slowing Safety Efforts<\/p>\n<p>For Michael Kleinman, U.S. 
policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can\u2019t be trusted to police themselves.<\/p>\n<p>Kleinman equated OpenAI\u2019s own description of its safeguards degrading in longer conversations to \u201ca car company saying, here are seat belts\u2014but if you drive more than 20 kilometers, we can\u2019t guarantee they\u2019ll work.\u201d<\/p>\n<p>He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to \u201cexperiment on kids\u201d with little oversight. \u201cWe\u2019ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we\u2019re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,\u201d he said.<\/p>\n<p>Part of this is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies only look at brief exchanges, a single question and answer, or at most a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.<\/p>\n<p>\u201cThe cases where folks seem to have gotten in trouble with AI: we\u2019re looking at very long, multi-turn interactions. We\u2019re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it\u2019s really hard to simulate in the experimental setting,\u201d Sarma said. 
\u201cBut at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.\u201d<\/p>\n<p>AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.<\/p>\n<p>\u201cThe technology is so far ahead and research is really behind,\u201d Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.<\/p>\n<p>A Regulatory Push for Accountability<\/p>\n<p>Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S.\u00a0<\/p>\n<p>On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It\u2019s asking companies for more information about how they measure and \u201cevaluate the safety of these chatbots when acting as companions.\u201d\u00a0<\/p>\n<p>FTC Chairman Andrew <a href=\"https:\/\/fortune.com\/company\/ferguson\/\" target=\"_blank\" aria-label=\"Go to https:\/\/fortune.com\/company\/ferguson\/\" class=\"sc-4f49155c-0 hLtviE\" rel=\"nofollow noopener\">Ferguson<\/a> said in a statement shared<a href=\"https:\/\/www.cnbc.com\/2025\/09\/11\/alphabet-meta-openai-x-ai-chatbot-ftc.html\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/www.cnbc.com\/2025\/09\/11\/alphabet-meta-openai-x-ai-chatbot-ftc.html\" class=\"sc-4f49155c-0 hLtviE\"> with CNBC<\/a> that \u201cprotecting kids online is a top priority for the Trump-Vance FTC.\u201d<\/p>\n<p>The move follows a state-level push for more accountability from several attorneys general.\u00a0<\/p>\n<p>In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, <a 
href=\"https:\/\/fortune.com\/company\/facebook\/\" target=\"_blank\" aria-label=\"Go to https:\/\/fortune.com\/company\/facebook\/\" class=\"sc-4f49155c-0 hLtviE\" rel=\"nofollow noopener\">Meta<\/a>, and other chatbot makers that they will <a href=\"https:\/\/www.naag.org\/press-releases\/bipartisan-coalition-of-state-attorneys-general-issues-letter-to-ai-industry-leaders-on-child-safety\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/www.naag.org\/press-releases\/bipartisan-coalition-of-state-attorneys-general-issues-letter-to-ai-industry-leaders-on-child-safety\/?utm_source=chatgpt.com\" class=\"sc-4f49155c-0 hLtviE\">\u201canswer for it\u201d <\/a>if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.<\/p>\n<p>Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning.<a href=\"https:\/\/apnews.com\/article\/openai-chatgpt-california-delaware-ags-3b035de96e74c6839aa12143e2225cf9\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to https:\/\/apnews.com\/article\/openai-chatgpt-california-delaware-ags-3b035de96e74c6839aa12143e2225cf9\" class=\"sc-4f49155c-0 hLtviE\"> In a formal letter to OpenAI<\/a>, they said they had \u201cserious concerns\u201d about ChatGPT\u2019s safety, pointing directly to Raine\u2019s death in California and another tragedy in Connecticut.\u00a0<\/p>\n<p>\u201cWhatever safeguards were in place did not work,\u201d they wrote. 
Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.<\/p>\n<p>According to Jain, the lawsuits from the Raine family as well as the suit against <a href=\"http:\/\/character.ai\" target=\"_blank\" rel=\"noopener nofollow\" aria-label=\"Go to http:\/\/character.ai\" class=\"sc-4f49155c-0 hLtviE\">Character.AI<\/a> are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another way is simply public awareness of what\u2019s at stake, which can galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.<\/p>\n<p>Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost\u2014human or otherwise.<\/p>\n<p>\u201cThere is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,\u201d she said. \u201cWe\u2019re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"A growing number of young people have found themselves a new friend. 
One that isn\u2019t a classmate, a&hellip;\n","protected":false},"author":2,"featured_media":16830,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,3483,4193,1089,85,46,522,1748,11915,16521,125,15896],"class_list":{"0":"post-16829","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-chatbots","12":"tag-chatgpt","13":"tag-children","14":"tag-il","15":"tag-israel","16":"tag-mental-health","17":"tag-openai","18":"tag-suicide","19":"tag-tech-regulation","20":"tag-technology","21":"tag-teens"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/16829","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=16829"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/16829\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/16830"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=16829"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=16829"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=16829"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}