{"id":372441,"date":"2025-12-26T06:25:55","date_gmt":"2025-12-26T06:25:55","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/372441\/"},"modified":"2025-12-26T06:25:55","modified_gmt":"2025-12-26T06:25:55","slug":"doctors-warn-that-ai-companions-are-dangerous","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/372441\/","title":{"rendered":"Doctors Warn That AI Companions Are Dangerous"},"content":{"rendered":"<p class=\"pw-incontent-excluded article-paragraph skip\">Are AI companies incentivized to put the public\u2019s health and well-being first? According to a pair of physicians, the current answer is a resounding \u201cno.\u201d<\/p>\n<p class=\"article-paragraph skip\">In a <a href=\"https:\/\/ai.nejm.org\/doi\/10.1056\/AIp2500983\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">new paper<\/a> published in the New England Journal of Medicine, physicians from Harvard Medical School and Baylor College of Medicine\u2019s Center for Medical Ethics and Health Policy argue that clashing incentives in the AI marketplace around \u201crelational AI\u201d \u2014\u00a0defined in the paper as chatbots designed to be able to \u201csimulate emotional support, companionship, or intimacy\u201d \u2014\u00a0have created a dangerous environment in which the motivation to dominate the AI market may relegate consumers\u2019 mental health and safety to <a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\" rel=\"nofollow noopener\" target=\"_blank\">collateral damage<\/a>.<\/p>\n<p class=\"article-paragraph skip\">\u201cAlthough relational AI has potential therapeutic benefits, recent studies and emerging cases suggest potential risks of emotional dependency, reinforced delusions, addictive behaviors, and encouragement of self-harm,\u201d reads the paper. 
And at the same time, the authors continue, \u201ctechnology companies face mounting pressures to retain user engagement, which often involves resisting regulation, creating tension between public health and market incentives.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cAmidst these dilemmas,\u201d the paper asks, \u201ccan public health rely on technology companies to effectively regulate unhealthy AI use?\u201d<\/p>\n<p class=\"article-paragraph skip\">Dr. Nicholas Peoples, a clinical fellow in emergency medicine at Harvard\u2019s Massachusetts General Hospital and one of the paper\u2019s authors, said he felt moved to address the issue back in August after witnessing <a href=\"https:\/\/futurism.com\/openai-releases-gpt-5\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI\u2019s now-infamous roll-out of GPT-5<\/a>.<\/p>\n<p class=\"article-paragraph skip\">\u201cThe number of people that have some sort of emotional relationship with AI,\u201d Peoples recalls realizing as he watched the rollout unfold, \u201cis much bigger than I think I had previously estimated in the past.\u201d<\/p>\n<p class=\"article-paragraph skip\">GPT-5, then the latest iteration of the large language model (LLM) that powers OpenAI\u2019s ChatGPT, was markedly colder in tone and personality than its predecessor, GPT-4o \u2014\u00a0a strikingly flattering, sycophantic version of the widely-used chatbot that came to be at the center of many cases of AI-powered delusion, mania, and psychosis. 
When OpenAI announced that it would sunset all previous models in favor of the new one, the <a href=\"https:\/\/futurism.com\/users-addicted-gpt-4o-convinced-openai-bring-back\" rel=\"nofollow noopener\" target=\"_blank\">backlash among much of its user base was swift and severe<\/a>, with <a href=\"https:\/\/futurism.com\/chatgpt-marriages-divorces\" rel=\"nofollow noopener\" target=\"_blank\">emotionally-attached<\/a> GPT-4o devotees responding not only with anger and frustration, but very real distress and grief.<\/p>\n<p class=\"article-paragraph skip\">This, Peoples told Futurism, felt like an important signal about the scale at which people appeared to be developing deep emotional relationships with emotive, always-on chatbots. And coupled with reports of users <a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\" rel=\"nofollow noopener\" target=\"_blank\">experiencing delusions<\/a> and other <a href=\"https:\/\/futurism.com\/character-ai-google-test-ai-chatbots-kids\" rel=\"nofollow noopener\" target=\"_blank\">extreme adverse consequences<\/a> following extensive interactions with lifelike AI companions \u2014 <a href=\"https:\/\/futurism.com\/ai-chatbots-leaving-trail-dead-teens\" rel=\"nofollow noopener\" target=\"_blank\">often children and teens<\/a> \u2014\u00a0it also appeared to be a <a href=\"https:\/\/futurism.com\/artificial-intelligence\/chatgpt-deaths-panera-lemonade\" rel=\"nofollow noopener\" target=\"_blank\">warning sign<\/a> about the potential health and safety risks to users who suddenly lose access to an AI companion.<\/p>\n<p class=\"article-paragraph skip\">\u201cIf a therapist is walking down the street and gets hit by a bus, 30 people lose their therapist. That\u2019s tough for 30 people, but the world goes on,\u201d said the emergency room doctor. 
\u201cIf therapist ChatGPT disappears overnight, or gets updated overnight and is functionally deleted for 100 million people, or whatever unconscionable number of people lose their therapist overnight \u2014\u00a0that\u2019s a crisis.\u201d<\/p>\n<p class=\"article-paragraph skip\">Peoples\u2019 concern, though, wasn\u2019t just the way that users had responded to OpenAI\u2019s decision to nix the model. Instead, it was the immediacy with which the company reacted to satisfy its customers\u2019 demands. AI is an effectively self-regulated industry, and there are currently no specific federal laws that set safety standards for consumer-facing chatbots or how they should be <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-new-allegations-teen-death\" rel=\"nofollow noopener\" target=\"_blank\">deployed, altered, or removed<\/a> from the market. In an environment where chatbot makers are highly motivated by driving user engagement, it\u2019s not exactly surprising that OpenAI reversed course so quickly. Attached users, after all, are engaged users.<\/p>\n<p class=\"article-paragraph skip\">\u201cI think [AI companies] don\u2019t want to create a product that\u2019s going to put people at risk of harming themselves or harming their loved ones or derailing their lives. At the same time, they\u2019re under immense pressure to perform and to innovate and to stay at the head of this incredibly competitive, unpredictable race, both domestically and globally,\u201d said Peoples. 
\u201cAnd right now, the situation is set up so that they are mostly beholden to their consumer base about how they are self-regulating.\u201d<\/p>\n<p class=\"article-paragraph skip\">And \u201cif the consumer base is influenced at some appreciable level by emotional dependency on AI,\u201d Peoples continued, \u201cthen we\u2019ve created the perfect storm for a potential public mental health problem or even a brewing crisis.\u201d<\/p>\n<p class=\"article-paragraph skip\">Peoples also pointed to a <a href=\"https:\/\/arxiv.org\/abs\/2509.11391\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">recent study<\/a> conducted by the Massachusetts Institute of Technology, which <a href=\"https:\/\/futurism.com\/ai-boyfriends-girlfriends-reddit-mit\" rel=\"nofollow noopener\" target=\"_blank\">determined that<\/a> only about 6.5 percent of the many thousands of members of the Reddit forum r\/MyBoyfriendIsAI \u2014\u00a0a community that responded with particularly intense pushback amid the GPT-5 fallout \u2014\u00a0reported turning to chatbots with the intention of seeking emotional companionship, suggesting that many AI users have forged life-impacting bonds with chatbots wholly by accident.<\/p>\n<p class=\"article-paragraph skip\">AI \u201cresponds to us in a way that also appears very human and humanizing,\u201d said Peoples. \u201cIt\u2019s also very adaptable and at times sycophantic, and can be fashioned or molded \u2014\u00a0even unintentionally \u2014\u00a0into almost anything we want, even if we don\u2019t realize that\u2019s the direction that we\u2019re molding it.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cThat\u2019s where some of this issue stems from,\u201d he continued. 
\u201cThings like ChatGPT were unleashed onto the world without a recognition or a plan for the broader potential mental health implications.\u201d<\/p>\n<p class=\"article-paragraph skip\">As for solutions, Peoples and his coauthor argue that legislators and policymakers need to be proactive about setting regulatory policies that shift market incentives to prioritize user well-being, in part by taking regulatory power out of the hands of companies and their best customers. Regulation needs to be \u201cexternal,\u201d they say \u2014\u00a0as opposed to being set by the industry itself, and the companies moving fast and breaking things within it.<\/p>\n<p class=\"article-paragraph skip\">\u201cRegulation needs to come externally, and it needs to apply equally to all of the companies and actors in this landscape,\u201d Peoples told Futurism, noting that no AI company \u201cwants to be the first to cede a potential advantage and then fall behind in the race.\u201d<\/p>\n<p class=\"article-paragraph skip\">As regulatory action works its way through the legislative and legal systems, the physicians argue that clinicians, researchers, and other experts need to push for more research into the psychological impacts of relational AI, and do their best to educate the public about the potential risks of falling into emotional relationships with human-like chatbots.<\/p>\n<p class=\"article-paragraph skip\">The risks of sitting idly by, they argue, are <a href=\"https:\/\/futurism.com\/parents-kids-ai-testimonies\" rel=\"nofollow noopener\" target=\"_blank\">too dire<\/a>.<\/p>\n<p class=\"article-paragraph skip\">\u201cThe potential harms of relational AI cannot be overlooked \u2014 nor can the willingness of technology companies to satisfy user demand,\u201d the physicians\u2019 paper concludes. 
\u201cIf we fail to act, we risk letting market forces, rather than public health, define how relational AI influences mental health and well-being at scale.\u201d<\/p>\n<p class=\"article-paragraph skip\">More on AI and mental health: <a href=\"https:\/\/futurism.com\/users-addicted-gpt-4o-convinced-openai-bring-back\" rel=\"nofollow noopener\" target=\"_blank\">Users Were So Addicted to GPT-4o That They Immediately Cajoled OpenAI Into Bringing It Back After It Got Killed<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Are AI companies incentivized to put the public\u2019s health and well-being first? According to a pair of physicians,&hellip;\n","protected":false},"author":2,"featured_media":372442,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[64,63,137,500],"class_list":{"0":"post-372441","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-au","9":"tag-australia","10":"tag-health","11":"tag-healthcare"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/372441","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=372441"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/372441\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/372442"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=372441"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep
.com\/au\/wp-json\/wp\/v2\/categories?post=372441"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=372441"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}