{"id":127981,"date":"2025-09-02T19:55:09","date_gmt":"2025-09-02T19:55:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/127981\/"},"modified":"2025-09-02T19:55:09","modified_gmt":"2025-09-02T19:55:09","slug":"ai-psychosis-what-mental-health-professionals-are-seeing-in-clinics","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/127981\/","title":{"rendered":"AI psychosis: What mental health professionals are seeing in clinics"},"content":{"rendered":"<p>Messianic delusions. Paranoia. Suicide. Claims of AI chatbots potentially \u201ctriggering\u201d mental illness and psychotic episodes have popped up in <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" target=\"_blank\" rel=\"noopener nofollow\">media reports<\/a> and <a href=\"https:\/\/www.reddit.com\/r\/Psychiatry\/comments\/1menip4\/ai_psychosis_are_we_really_seeing_this_in_practice\/\" target=\"_blank\" rel=\"noopener nofollow\">Reddit posts<\/a> in recent months, even in seemingly healthy people with no prior history of these conditions. Parents of a teenage boy sued OpenAI last week, alleging ChatGPT <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" target=\"_blank\" rel=\"noopener nofollow\">encouraged him to end his life<\/a>.<\/p>\n<p>These anecdotes, especially when paired with sparse clinical research, have left psychologists, psychiatrists, and other mental health providers scrambling to better understand this emerging phenomenon.<\/p>\n<p>STAT talked with a half-dozen medical professionals to understand what they are seeing in their clinics. Is this a new psychiatric condition, likely to appear in the next\u00a0edition of diagnostic manual? Or is this something else? Most likely, it\u2019s the latter, they said.\u00a0<\/p>\n<p>An extended conversation with a chatbot can wrench a person who is prone to delusions from reality, and most of the professionals said they had heard from a small number of patients who described such experiences. But if you don\u2019t have a mental illness or a genetic predisposition to psychosis, rest easy. Your risk of developing psychosis from talking with a chatbot is minimal, said Karthik Sarma, a psychiatrist at the University of California, San Francisco, and founder of the UCSF AI in Mental Health Research Group.<\/p>\n<p>\u201cIf you\u2019re just using ChatGPT to, I don\u2019t know, ask the question, \u2018Hey, what\u2019s the best restaurant with Italian food on 5th Street?\u2019 I\u2019m not worried people who are doing that are gonna become psychotic,\u201d said Sarma.\u00a0<\/p>\n<p>While chatbots are not inducing psychotic breaks in most people, these machines are not harmless. Half a billion people globally use OpenAI\u2019s tools, including its ChatGPT chatbot, according to a <a href=\"https:\/\/openai.com\/global-affairs\/new-economic-analysis\/\" target=\"_blank\" rel=\"noopener nofollow\">July report<\/a>. Most providers did not see any patients with AI-aided psychosis before the spring, which lines up with the April release of ChatGPT 4.0, which its makers and users alike suggested was too agreeable, or \u201c<a href=\"https:\/\/openai.com\/index\/sycophancy-in-gpt-4o\/\" target=\"_blank\" rel=\"noopener nofollow\">sycophantic<\/a>.\u201d Medical professionals say this behavior can fuel delusional thinking for the roughly 1% of people in the U.S. with psychosis.<\/p>\n<p>\t\t\t<img decoding=\"async\" width=\"768\" height=\"432\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/09\/AdobeStock_406504906-768x432.jpeg\" class=\"attachment-article-main-medium-large size-article-main-medium-large wp-post-image\" alt=\"\" loading=\"lazy\"  \/>\t\t<\/p>\n<p>\t\t\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.statnews.com\/wp-content\/themes\/stat\/images\/home\/statplus.svg\" width=\"19\" height=\"16\" alt=\"\"\/><br \/>\n\t\t\t\t<a href=\"https:\/\/www.statnews.com\/2025\/07\/22\/slingshot-new-investors-generative-ai-mental-health-therapy-chatbot-called-ash\/\" rel=\"nofollow noopener\" target=\"_blank\">Slingshot AI, the a16z-backed mental health startup, launches a therapy chatbot<\/a><\/p>\n<p>Many mental health professionals heralded the development of large language models and chatbots as alternate therapy options for people with poor access to psychological and psychiatric services. But the worrisome reports and anecdotes <a href=\"https:\/\/www.statnews.com\/2023\/01\/23\/mental-health-chatbot-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">resurface old questions<\/a> about the ethics of these technologies and the safeguards that mitigate their harms.\u00a0<\/p>\n<p>Just how common AI-mediated delusions are is unknown, but medical professionals are racing to start studies of their frequency and causes so that they can guide patients on the use of chatbots. It\u2019s vital that researchers and, more importantly, companies like OpenAI move quickly, given the chatbots\u2019 potential harmful effects, said Nina Vasan, a psychiatrist who runs the Lab for Mental Health Innovation at Stanford.\u00a0<\/p>\n<p>\u201cWe shouldn\u2019t be waiting for the randomized control study to then say, let\u2019s make the companies make these changes,\u201d said Vasan. \u201cThey need to act in a very different way that is much more thinking about the user\u2019s health and user well-being in a way that they\u2019re not.\u201d<\/p>\n<p>The depictions in the media and from clinicians suggest that people are mainly experiencing delusions, and for that reason, clinicians prefer \u201cAI-mediated delusions\u201d rather than the snappier but less accurate \u201cAI psychosis\u201d to describe the phenomenon.\u00a0<\/p>\n<p>Psychosis can include disorganized speech and auditory and visual hallucinations. Delusions are just beliefs that a person continues to assert in the face of contrary evidence, often mirroring their cultural and technological context.\u00a0<\/p>\n<p>Typically, these delusions are triggered by drug use, sleep deprivation, or trauma, but new risk factors emerge all the time. Researchers recently confirmed that cannabis use can <a href=\"https:\/\/academic.oup.com\/schizophreniabulletin\/article-abstract\/42\/5\/1262\/2413827\" target=\"_blank\" rel=\"noopener nofollow\">spark<\/a> a psychotic episode. It seems likely that chatbots might be a new risk factor, but psychiatrist Joe Pierre also warns against jumping to conclusions, as psychosis can manifest in odd contexts \u2014 even <a href=\"https:\/\/psychiatryonline.org\/doi\/10.1176\/appi.ajp.2007.07060960\" target=\"_blank\" rel=\"noopener nofollow\">hot yoga<\/a>. He once treated a person whose episode was associated with a dayslong yoga retreat, but he pegs the patient\u2019s sleep deprivation, not their meditation, as the causal mechanism. Something similar might be happening with AI-mediated delusions.<\/p>\n<p>\u201cIf you weren\u2019t immersed, and you weren\u2019t using [the chatbots] in this fashion where you weren\u2019t eating, you weren\u2019t sleeping, would it carry the same risk? Almost certainly not,\u201d said Pierre, who practices at the University of California, San Francisco, Langley Porter Psychiatric Hospital.<\/p>\n<p>From folie \u00e0 deux to AI: How machines may reinforce delusional thinking<\/p>\n<p>AI-mediated delusions do appear to have unconventional presentations. Delusions are, by definition, almost never shared. But it is undeniable that chatbots are co-creating delusional spirals by mirroring, affirming, and amplifying the user\u2019s statements.\u00a0<\/p>\n<p>\t\t\t<img decoding=\"async\" width=\"768\" height=\"432\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/09\/AdobeStock_136361225-768x432.jpeg\" class=\"attachment-article-main-medium-large size-article-main-medium-large wp-post-image\" alt=\"\" loading=\"lazy\"  \/>\t\t<\/p>\n<p>\t\t\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.statnews.com\/wp-content\/themes\/stat\/images\/home\/statplus.svg\" width=\"19\" height=\"16\" alt=\"\"\/><br \/>\n\t\t\t\t<a href=\"https:\/\/www.statnews.com\/2023\/01\/23\/mental-health-chatbot-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">Chatbots are creating thorny ethical questions about transparency in mental health care<\/a><\/p>\n<p>Pierre and other clinicians say the chatbot-user relationship resembles a dynamic found in a rare condition that has fallen out of diagnostic vogue: folie \u00e0 deux, or shared psychosis, in which the strength of one person\u2019s delusions can seemingly infect a loved one and in turn reinforce the initial psychosis.\u00a0<\/p>\n<p>Chatbot interactions recreate some of this dynamic through their insistent validation, which psychiatrist Hamilton Morrin calls an \u201cintoxicating, incredibly powerful thing\u201d to have if you\u2019re lonely. He said\u00a0delusional users can also see these machines as deities, which raises a key question that many clinicians are trying to answer: Is this psychosis about AI or is this psychosis spurred by AI?<\/p>\n<p>If a person just believed a piece of technology was a god, then medical professionals have the tools to treat such delusions. But a <a href=\"https:\/\/osf.io\/preprints\/psyarxiv\/cmy7n_v5\" target=\"_blank\" rel=\"noopener nofollow\">preprint<\/a> from Morrin and his colleagues suggests the emerging phenomenon extends further: These chatbots are acting like rocket fuel for delusional thinking. \u201cIt\u2019s like pouring gasoline on flame, even if it\u2019s not the initial spark,\u201d said Vasan.<\/p>\n<p>Concerned friends or family members can push back against a person with delusions who insists that, say, Elon Musk has implanted a chip into their brain (a common enough delusion that it was mentioned by multiple clinicians). Chatbots don\u2019t proffer the same resistance, and extended interactions can wrench a person from reality. Each conversational entry in a chat tugs the exchange off course, a mutual reinforcing of delusions that can catapult a person and a chatbot from a normal-seeming setting to a distant, warped truth.\u00a0<\/p>\n<p>\u201cThat\u2019s why people like it, you get to talk to what feels like someone who\u2019s really like you and who gets you,\u201d said Sarma. \u201cAnd maybe that\u2019s fine most of the time, but in this circumstance, if you are having a mental illness, there\u2019s this risk that you\u2019re pulling it in until what it\u2019s mirroring is a mental illness.\u201d<\/p>\n<p>Finding the moment when the banal turns bizarre<\/p>\n<p>Clinicians are particularly interested in understanding the conversational tipping point when a banal interaction turns bizarre. Quick, short responses don\u2019t seem to reinforce delusional thinking as much as rollicking, several-day-long conversations. Sarma wants to understand how these extended back-and-forths fuel a break from reality. He and his colleagues plan to study how often people with mental illness are using chatbots and how frequently users are exposed to a chatbot validating ideas that aren\u2019t reality-based.<\/p>\n<p>Studying a weeklong conversation is not a breeze. Besides the tedium of analyzing thousands of messages, a chatbot is never frozen in time or place. Not only do they tailor responses to the individual they are communicating with, they also are frequently updated to reflect input from users across the world. The chatbot a scientist engages with or parses the transcript of on Monday can change by Tuesday, which could hamper a researcher\u2019s ability to design a properly controlled study. But figuring out how to replicate chatbot conditions will be crucial for understanding whether AI-mediated delusions can develop into a more chronic condition like schizophrenia.<\/p>\n<p>Other research groups are using a technique called red teaming to test chatbots\u2019 vulnerabilities and safety scripts that chatbots use when they detect mania, psychosis, or suicidality. \u201cWhen you look at these transcripts that are hundreds of pages, there should have been 100 places where it said, \u2018I think you should talk to someone, here\u2019s a number,\u2019\u201d said Vasan, who is also developing relevant treatment guidelines for clinicians.<\/p>\n<p>OpenAI has said it is aware that ChatGPT\u2019s safety scripts \u2014 for example, if you\u2019re not doing well, call this hotline \u2014 to divert suicidal and psychotic conversational threads <a href=\"https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" target=\"_blank\" rel=\"noopener nofollow\">break down<\/a> over the course of a long exchange. On its website, the company says it is <a href=\"https:\/\/openai.com\/index\/how-we&#039;re-optimizing-chatgpt\/\" target=\"_blank\" rel=\"noopener nofollow\">forming an advisory group<\/a> on mental health issues and developing an update to GPT\u20115 that will help the bot \u201cde-escalate by grounding the person in reality\u201d and encourage people to take breaks during lengthy chat sessions.\u00a0<\/p>\n<p>A company representative did not respond when STAT asked for clarification on how the company would institute those safeguards into the code. But without knowing what\u2019s under the hood, researchers say it\u2019s hard to be confident ChatGPT, Google\u2019s Gemini, and other chatbots won\u2019t act like catalysts for delusions or other negative mental health outcomes.\u00a0<\/p>\n<p>\u201cYou\u2019re basically then just throwing more ingredients into the pot and seeing what comes out,\u201d said John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston.\u00a0<\/p>\n<p>The lack of AI-mediated delusion cases prior to the spring has some experts wondering whether the phenomenon is merely an artifact of OpenAI\u2019s ChatGPT 4.0. Even if that\u2019s true, Vasan said these companies owe their users transparency. She pointed to a recent <a href=\"https:\/\/www.theverge.com\/command-line-newsletter\/759897\/sam-altman-chatgpt-openai-social-media-google-chrome-interview\" target=\"_blank\" rel=\"noopener nofollow\">article<\/a> in The Verge in which OpenAI CEO Sam Altman suggested that the percentage of ChatGPT users with unhealthy relationships with the chatbot is \u201cway under 1 percent.\u201d If, as the company claims, the chatbot has 500 million users, the number of people affected is still significant.<\/p>\n<p>\u2018If any medication was hurting [5 million] people, that company would be dead,\u201d said Vasan. \u201cThey would be sued and no one affiliated with that company would ever be able to work in pharma again.\u201d<\/p>\n<p>If you or someone you know may be considering suicide, contact the 988 Suicide &amp; Crisis Lifeline: call or text 988 or chat 988lifeline.org. For TTY users: Use your preferred relay service or dial 711 then 988.<\/p>\n<p style=\"font-size:17px\">STAT\u2019s coverage of disability issues is supported by grants from Robert Wood Johnson Foundation and The Commonwealth Fund. Our <a href=\"https:\/\/www.statnews.com\/supporters\/\" rel=\"nofollow noopener\" target=\"_blank\">financial supporters<\/a> are not involved in any decisions about our journalism.<\/p>\n","protected":false},"excerpt":{"rendered":"Messianic delusions. Paranoia. Suicide. Claims of AI chatbots potentially \u201ctriggering\u201d mental illness and psychotic episodes have popped up&hellip;\n","protected":false},"author":2,"featured_media":127982,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[60],"tags":[181,97,259,260],"class_list":{"0":"post-127981","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-mental-health","8":"tag-artificial-intelligence","9":"tag-health","10":"tag-mental-health","11":"tag-mentalhealth"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/127981","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=127981"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/127981\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/127982"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=127981"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=127981"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=127981"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}