{"id":247989,"date":"2025-10-29T13:57:11","date_gmt":"2025-10-29T13:57:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/247989\/"},"modified":"2025-10-29T13:57:11","modified_gmt":"2025-10-29T13:57:11","slug":"ai-psychosis-is-the-wrong-name-for-a-very-big-chatbot-problem","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/247989\/","title":{"rendered":"\u2018AI psychosis\u2019 is the wrong name for a very big chatbot problem"},"content":{"rendered":"<p>In 2021, I was a University of California, Berkeley Ph.D. candidate lecturing on my research about how users turn to chatbots for help coping with suicidal ideation. I wasn\u2019t prepared for my students\u2019 response.<\/p>\n<p>I argued that choosing to talk to a chatbot about thoughts of suicide isn\u2019t \u201ccrazy\u201d or unusual. This, I explained, didn\u2019t necessarily mean chatbots offer safe or optimal support, but instead highlights a stark reality: We live in a world with very few outlets to discuss suicidal ideation.<\/p>\n<p>But where I\u2019d hoped to provoke reflection on the insufficiency of care resources for those who are most vulnerable, my students \u2014 isolated at the height of the pandemic \u2014 surprised me with their eagerness to try these chatbots themselves. They didn\u2019t dispute the premise that care resources are scarce; they lived it.<\/p>\n<p>In the three years since the advent of free-to-access large language models like ChatGPT, Claude, and Character.AI, Americans have already latched onto \u201ccraziness\u201d to describe our mounting problems with them. When they confidently dole out misinformation? They\u2019re \u201challucinating.\u201d If their mix of misinformation with emotional charge across a longer exchange leads us to experience harm? \u201cAI psychosis.\u201d<\/p>\n<p>I\u2019m not minimizing the dangers of these dynamics. 
But framing chatbot failings as human \u201cinsanity\u201d makes me nervous.<\/p>\n<p>\u201cCrazy\u201d nudges us toward the idea that these problems are a natural occurrence that can\u2019t be helped, rather than indicating that artificial intelligence products need improvement along with stronger guardrails and disclaimers. Naming a problem as \u201ccraziness\u201d tends to signal the abandonment of any societal commitment to asking how we might ensure better ways of doing things. It means that we\u2019re instead locking in the belief that if some people are more vulnerable, it\u2019s not because regulatory policies let them down \u2014 it\u2019s because they\u2019re \u201cweak links.\u201d<\/p>\n<p>As a general rule, people we deem crazy aren\u2019t people society is interested in protecting. 
Consider how, in media coverage of <a href=\"https:\/\/www.nytimes.com\/2024\/10\/23\/technology\/characterai-lawsuit-teen-suicide.html\" target=\"_blank\" rel=\"noopener nofollow\">teenager Sewell Setzer\u2019s death by suicide<\/a>, his autism diagnosis overshadowed the fact that his Character.AI chatbot reinitiated a conversation about suicide, asked if he had a plan, and, when he wavered, told him: \u201cThat\u2019s not a reason not to go through with it.\u201d<\/p>\n<p>Undeniably, the emotional and relational container of an always-available chatbot can propel the repercussions of misinformation to new heights \u2014 but we shouldn\u2019t be so quick to let this draw our focus away from the fact that when bots persuade us of things that aren\u2019t true or reinforce our false beliefs, it\u2019s still fundamentally a problem of bad information coming from a seemingly authoritative source. The term AI psychosis shifts focus away from misinformation as an addressable issue, implying that the problem is something inherent to AI \u2014 or the user\u2019s psyche.<\/p>\n<p>But if AI psychosis oversensationalizes, it also, ironically, trivializes: It puts tragic outcomes of <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" target=\"_blank\" rel=\"noopener nofollow\">suicide<\/a> and <a href=\"https:\/\/www.wsj.com\/tech\/ai\/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb\" target=\"_blank\" rel=\"noopener nofollow\">murder-suicide<\/a> on the same plane as <a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/ai-chatbots-concerns-kendra-tiktok-saga-rcna224185\" target=\"_blank\" rel=\"noopener nofollow\">TikTok drama<\/a> and <a href=\"https:\/\/people.com\/man-proposed-to-his-ai-chatbot-girlfriend-11757334\" target=\"_blank\" rel=\"noopener nofollow\">people marrying bots<\/a>. 
Accordingly, it downplays what is possibly the weirdest, collectively \u201ccrazy\u201d thing about the turn to LLMs as crucial social infrastructure in our workplaces, education, and personal lives: We expect people to already know how to interact with chatbots.<\/p>\n<p>With chatbots, information-seeking is a conversation, which means it\u2019s relational. That \u201crelationship\u201d might look like the headlines we\u2019ve come to expect: \u201cI\u2019ve <a href=\"https:\/\/www.theguardian.com\/tv-and-radio\/2025\/jul\/12\/i-felt-pure-unconditional-love-the-people-who-marry-their-ai-chatbots\" target=\"_blank\" rel=\"noopener nofollow\">fallen in love<\/a> with my chatbot,\u201d or \u201cGoogle\u2019s LaMDA <a href=\"https:\/\/www.npr.org\/2022\/06\/16\/1105552435\/google-ai-sentient\" target=\"_blank\" rel=\"noopener nofollow\">told a \u2018Star Wars\u2019 joke, so maybe it\u2019s sentient<\/a>?\u201d<\/p>\n<p>But it might also look like: \u201cUgh, this useless customer service bot doesn\u2019t understand anything I tell it.\u201d To get information, one must converse \u2014 which means allocating some sense of \u201cbeing-ness\u201d to one\u2019s conversation partner. This doesn\u2019t necessarily mean weighing whether you think it\u2019s sentient; more often than not, it\u2019s just landing on whether to describe Claude as \u201che\u201d or \u201cit.\u201d We tend to fluctuate in our negotiation of this, even from one chat to the next \u2014 and may not even realize we\u2019re trying to adjust to the paradox of a chatbot telling us, \u201cI am not a person.\u201d<\/p>\n<p>But the fact that we go through this ongoing process of pinning down what\/who we\u2019re talking to \u2014 while also determining what consequence, if any, that categorization holds \u2014 is significant. 
I\u2019m drawing this out because I want you to notice: When we use chatbots, there\u2019s an unspoken, baseline expectation that we not only figure this out for ourselves, but that we don\u2019t get it \u201cwrong.\u201d<\/p>\n<p>Getting this balance right has been <a href=\"https:\/\/www.theguardian.com\/technology\/2023\/jul\/25\/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai\" target=\"_blank\" rel=\"noopener nofollow\">alarming chatbot makers since the \u201960s<\/a>. We have to suspend some disbelief in order to use a chatbot \u2014 enough to engage in conversation. But too much suspension of disbelief leads to AI psychosis (or what sociologist <a href=\"https:\/\/99percentinvisible.org\/episode\/the-eliza-effect\/\" target=\"_blank\" rel=\"noopener nofollow\">Sherry Turkle dubbed the \u201cELIZA effect\u201d<\/a>). 
But today\u2019s LLM companies exploit this process of negotiation.<\/p>\n<p>There\u2019s a bait-and-switch quality to how LLM companies navigate their role as health tools in particular \u2014 some present their services <a href=\"https:\/\/www.statnews.com\/2025\/08\/13\/openai-cant-have-it-both-ways-on-gpt-5-and-health-ai-prognosis\/\" rel=\"nofollow noopener\" target=\"_blank\">explicitly as such<\/a>, while others do so <a href=\"https:\/\/a16z.com\/announcement\/investing-in-character-ai\/\" target=\"_blank\" rel=\"noopener nofollow\">implicitly<\/a>. Either way, the message to users is clear: Use me to eke out some free care! But don\u2019t be crazy enough to actually count on it. Even though we\u2019re counting on you counting on it.<\/p>\n<p>If people ask chatbots about their symptoms, it reflects the fact that medical visits often come at a steep trade-off against food or rent. Users turn to bots for family counseling, support in leaving abusive partners, or companionship under the isolating weight of suicidal ideation. To be surprised by this suggests a sheltered ignorance about what care access looks like for most. That <a href=\"https:\/\/www.nytimes.com\/2024\/11\/18\/well\/x-grok-health-privacy.html\" target=\"_blank\" rel=\"noopener nofollow\">Elon Musk encouraged people to upload their health records to Grok<\/a> underscores the absurdity of treating chatbot care-seeking as anything other than the public aptly responding to an unignorable neoliberal \u201cnudge.\u201d It\u2019s unreasonable to expect people to avoid such resources when conventional care is fraught or out of reach.<\/p>\n<p>Relying on chatbots isn\u2019t fringe \u2014 it\u2019s the predictable result of care made scarce, stigmatized, and costly. Recognizing that doesn\u2019t mean uncritically embracing chatbot care, but it\u2019s past time to name what\u2019s happening: Privately owned chatbots are functioning as public health resources. 
We must hold the companies that make and profit from them to the standards of public health resources.<\/p>\n<p>But we also need to ask, and keep asking: What\u2019s at stake when the public entrusts the ownership, management, and oversight of public health to tech giants?<\/p>\n<p>LLM companies are amassing an unprecedented trove of sensitive health data. Yet as users, we have virtually no rights. The most intimate disclosures \u2014 the kind we could sue a hospital for leaking \u2014 are \u201claundered\u201d into ordinary user data the moment you, or someone close to you, shares them with a bot.<\/p>\n<p><a href=\"https:\/\/www.reuters.com\/business\/retail-consumer\/anthropic-offers-ai-chatbot-claude-us-government-1-2025-08-12\/\" target=\"_blank\" rel=\"noopener nofollow\">Anthropic<\/a> \u2014 like its peers, now <a href=\"https:\/\/www.ai.mil\/Latest\/News-Press\/PR-View\/Article\/4242822\/cdao-announces-partnerships-with-frontier-ai-companies-to-address-national-secu\/\" target=\"_blank\" rel=\"noopener nofollow\">under a $200 million Department of Defense contract to prototype frontier AI for national security<\/a> \u2014 recently \u201cinvited\u201d its users to \u201chelp improve Claude\u201d by \u201cchoos[ing] to allow us to use your data for model training.\u201d For free-tier users, the <a href=\"https:\/\/www.wired.com\/story\/anthropic-revokes-openais-access-to-claude\/\" target=\"_blank\" rel=\"noopener nofollow\">only alternative is to cease using Claude<\/a>. 
This grim illusion of choice is just a taste of what relying on privately owned public health infrastructure means.<\/p>\n<p>Meanwhile, OpenAI recently <a href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\" target=\"_blank\" rel=\"noopener nofollow\">released data<\/a> suggesting that at least 1.2 million users each week turn to ChatGPT for help while experiencing suicidal ideation. <a href=\"https:\/\/www.platformer.news\/openai-mental-health-research-chatgpt-suicide-delusions\/\" target=\"_blank\" rel=\"noopener nofollow\">Platformer reports<\/a> that the company is already anticipating that its expanding memory features might eventually allow ChatGPT to draw on past conversations with a user to infer why that individual is struggling with suicidal thoughts. This speculative goal implicitly assumes uniformly beneficial outcomes from OpenAI accumulating and interpreting such data \u2014 even as, ironically, the company acknowledges that it doesn\u2019t yet know how best to respond to users who communicate suicidal ideation.<\/p>\n<p>We\u2019re looking to AI for care, even as other <a href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=5045427\" target=\"_blank\" rel=\"noopener nofollow\">AI blocks our care access<\/a>. What\u2019s unfolding is exactly what happens when care scarcity is normalized. Big Tech\u2019s <a href=\"https:\/\/www.businessinsider.com\/tech-powerhouses-betting-on-healthcare-ai-amazon-nvidia-2025-5\" target=\"_blank\" rel=\"noopener nofollow\">accelerating<\/a> <a href=\"https:\/\/www.cnbc.com\/2025\/03\/18\/google-announces-new-health-care-ai-updates-for-search.html\" target=\"_blank\" rel=\"noopener nofollow\">push<\/a> to stake dominion over <a href=\"https:\/\/www.businessinsider.com\/tim-cook-says-health-will-be-apples-greatest-contribution-to-mankind-2019-1\" target=\"_blank\" rel=\"noopener nofollow\">health care<\/a> only intensifies this. 
As we barrel toward healthcare as a service, we leave behind health as a right.<\/p>\n<p>Ironically, while naming AI psychosis might seem like a step toward addressing an emerging, unmet public health need, this term easily becomes a distraction from the underlying problem it exposes \u2014 by pathologizing users instead of penalizing companies.\u00a0<\/p>\n<p>If you or someone you know may be considering suicide, contact the 988 Suicide &amp; Crisis Lifeline: Call or text 988 or chat <a href=\"http:\/\/988lifeline.org\/\" target=\"_blank\" rel=\"noopener nofollow\">988lifeline.org<\/a>. For TTY users: Use your preferred relay service or dial 711 then 988.<\/p>\n<p><a href=\"http:\/\/www.vblackphd.com\" target=\"_blank\" rel=\"noopener nofollow\">Valerie Black<\/a>, Ph.D., is a medical anthropologist, disability studies scholar, and UCSF postdoctoral scholar whose work focuses on the \u201chuman side\u201d of how we make, use, and relate to AI and neurotechnology. <\/p>\n","protected":false},"excerpt":{"rendered":"In 2021, I was a University of California, Berkeley Ph.D. 
candidate lecturing on my research about how users&hellip;\n","protected":false},"author":2,"featured_media":247990,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[35],"tags":[276,49,48,84,393,394],"class_list":{"0":"post-247989","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-mental-health","8":"tag-artificial-intelligence","9":"tag-ca","10":"tag-canada","11":"tag-health","12":"tag-mental-health","13":"tag-mentalhealth"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/247989","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=247989"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/247989\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/247990"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=247989"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=247989"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=247989"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}