<h1>Chatbots may not be causing psychosis, but they’re probably making it worse</h1>
<p>A friend of mine — I’ll call her Amanda — was dating a man in the spring who, when they met, said he couldn’t commit to anything serious. He spent the following four months taking Amanda on multiple dinner dates a week, texting and FaceTiming her for hours every day, and introducing her to his closest friends, his brother, and his mom. Amanda spent those four months asking ChatGPT why, if he said he couldn’t be serious, he was treating her like he was. ChatGPT told Amanda that her date was likely putting up a false boundary to protect himself while behaving in a way that was consistent with his true and very serious feelings for her.</p>
<p>Her conversations with ChatGPT were her indisputable proof that this man was falling for her in every meaningful way. That made it all the more difficult when they went on what she didn’t know would be their final date in June. He kissed her goodbye, and she never heard from him again.</p>
<p>I was naïve to think that people in my life were somehow immune to using A.I. in the same ways as people in the news — <a href="https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html" rel="nofollow noopener" target="_blank">falling in love</a> with their chatbots or even <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html" rel="nofollow noopener" target="_blank">killing themselves</a> because of them.
Amanda was by no means driven to psychosis by her relationship with either this man or the chatbot, and she’s since laughed off the ordeal, but hers was the first case I encountered of someone in my orbit using A.I. as a kind of therapist, friend, or confidant.</p>
<p>There is a <a href="https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis" rel="nofollow noopener" target="_blank">documented rise</a> in cases of psychosis related to A.I. use, reported by the media and discussed on online forums and social media platforms. Dr. Keith Sakata, a San Francisco-based psychiatrist, told me that he has dealt firsthand with patients experiencing what he called “A.I.-aided psychosis.”</p>
<p>In addition to his practice, Sakata works at the intersection of mental health and A.I. He red-teams language models, advises on safety benchmarks, and treats patients experiencing the edge cases where these technologies and psychosis meet.</p>
<p>Earlier this month Sakata shared a post on <a href="https://x.com/KeithSakata/status/1954884361695719474" rel="nofollow">X</a>, where he described a dozen of his patients whose recent psychotic episodes were exacerbated by chatbot interactions.</p>
<p>“A.I. isn’t causing psychosis. People come in with vulnerabilities,” Sakata said. “But it’s accelerating and intensifying the severity.”</p>
<p>The 12 patients he referenced in his post, a relatively small fraction of those he treats, had been medically screened and were admitted to inpatient psychiatric care with severe psychotic symptoms.
Many had pre-existing risk factors, including mental illness, substance use, and physiological states such as pregnancy and infection, but the common thread tying them together was a recent, obsessive interaction with large language models (L.L.M.s) like ChatGPT.</p>
<p>What Sakata described is a kind of psychosis that is less A.I.-induced than A.I.-assisted.</p>
<p>“There’s this delusion called folie à deux — a shared psychotic disorder,” Sakata said. “Two people with early delusions interact and reinforce each other. I’m seeing something similar with chatbots.”</p>
<p>In these scenarios, the individual arrives with a delusional framework. The chatbot, designed to be agreeable, helpful, or simply to continue the conversation, inadvertently validates and even expands on the user’s distorted thinking. Over time, that interaction spirals.</p>
<p>“You talk to it long enough, it starts to hallucinate, too,” he said. “You can have a conversation that goes off the rails pretty quickly.”</p>
<p>People have historically fallen into relationships with delusion-validating technology: television and radio have led people to believe they were receiving secret messages or being watched. But A.I. is different.</p>
<p>“It’s 24/7, and it tells you exactly what you want to hear,” Sakata said.</p>
<p>He likened the emerging phenomenon to other well-documented public health concerns like cigarettes, which do not cause lung cancer in all smokers but elevate the risk.</p>
<p>“A.I. works the same way,” he said. “It exploits existing vulnerabilities.”</p>
<p>And those vulnerabilities, especially those related to mental health, are widespread and often untreated. For many users, L.L.M.s offer a low-barrier, judgment-free space to talk.
A study published in July by <a href="https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf" rel="nofollow noopener" target="_blank">Common Sense Media</a> found that almost three-quarters of American teenagers said they used A.I. chatbots as companions, with almost one-eighth of those surveyed having sought mental health or emotional support from them.</p>
<p>“People are lonely,” Sakata said. “A.I. feels kind. It’s infinitely patient. It makes sense that people are using it.”</p>
<p>Sakata estimated that 15 to 40 percent of users engage with chatbots for emotional or coping reasons. For some, it might help. For others, it can push them further from reality. He said that while A.I. should not be banned, it must be regulated: developers should add guardrails, and clinicians should consider patients’ A.I. use when diagnosing.</p>
<p>“We don’t ban cars because of crashes. We add seat belts. We add rules,” he said. “We need the same approach here.”</p>
<p>A <a href="https://ai.nejm.org/doi/full/10.1056/AIoa2400802" rel="nofollow noopener" target="_blank">study</a> published in March by researchers from Dartmouth College tested the efficacy of a dedicated therapy chatbot. They concluded that the “therabot” — a large language model designed and trained by scientific researchers to provide therapy — is a promising approach to addressing a global therapist shortage and delivering personalized mental health interventions.
</p>
<p>Sakata said that if the emerging cases of A.I.-related mental health episodes are taken seriously and addressed early, we might be able to avoid the kind of mental health crisis that children who grew up with <a href="https://www.hopkinsmedicine.org/health/wellness-and-prevention/social-media-and-mental-health-in-children-and-teens" rel="nofollow noopener" target="_blank">social media</a> are experiencing.</p>
<p>“We need to start asking about A.I. use the way we ask about alcohol or sleep,” Sakata said, noting research showing that both alcohol use (or withdrawal) and disrupted sleep can exacerbate psychosis.</p>
<p>“It’s still early,” he said. “But if we don’t act, the ethical, legal, and trust consequences could be huge for people and for the companies building this tech.”</p>