{"id":177295,"date":"2025-12-10T13:43:07","date_gmt":"2025-12-10T13:43:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/177295\/"},"modified":"2025-12-10T13:43:07","modified_gmt":"2025-12-10T13:43:07","slug":"a-new-mental-health-challenge-ai-jonathan-gibson","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/177295\/","title":{"rendered":"A New Mental Health Challenge: AI &#8211; Jonathan Gibson"},"content":{"rendered":"<p>When Don Grant, a California-based psychologist and fellow at the American Psychological Association, told a young patient to stop smoking marijuana, he didn\u2019t realize that he was competing with an AI chatbot for his patient\u2019s ear. In the following session, he discovered the chatbot had told the patient that his advice to stop using marijuana was wrong.<\/p>\n<p>Eventually, Grant was able to persuade the patient not to take the advice of an AI chatbot over that of his trained psychologist. \u201cDoes the chatbot know that right now, when you\u2019re talking to me, you\u2019re anxious because you have a restless foot, and I see it, and I know that\u2019s one of your tells?\u201d he asked the patient. He asked whether the patient had ever lied to the chatbot, to which the patient sheepishly mumbled, \u201cSometimes.\u201d\u00a0<\/p>\n<p>As artificial intelligence grows increasingly ubiquitous, mental health professionals, regulators, and tech companies are confronting a new reality as patients and consumers seek AI chatbots for therapy. \u201cThe virtual genie is out of the digital bottle,\u201d Grant told The Dispatch.<\/p>\n<p>But that dynamic is compounded by two other challenges.<\/p>\n<p>For one, it\u2019s difficult to know exactly how many people are using AI for mental health support. 
One <a href=\"https:\/\/www.kantar.com\/north-america\/inspiration\/research-services\/ai-for-emotional-support-pf\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a> of 10,000 global AI users earlier this year found that 54 percent had tried AI for mental health or well-being support. <a href=\"https:\/\/jamanetwork.com\/journals\/jamanetworkopen\/fullarticle\/2841067\" rel=\"nofollow noopener\" target=\"_blank\">Another<\/a> study also conducted this year found that around 13 percent of U.S. respondents used generative AI for mental health advice, increasing to 22 percent among those aged 18-21. Part of the challenge in getting accurate figures, however, lies in delineating where the category of \u201cmental health support\u201d begins. Zainab Iftikhar is a computer science Ph.D. candidate at Brown University working on technology and mental health. In her research-focused workshops with adolescents, most said they did not use AI \u201cfor mental health.\u201d But when the question was reframed to, \u201cDo you talk to chatbots about school problems, relationship issues, or for advice?\u201d the majority of the adolescents\u2019 answers shifted to yes.<\/p>\n<p>Anthony Becker is a licensed psychiatrist who is also pursuing a master\u2019s degree in AI at Johns Hopkins University. He estimates that between 10 percent and 30 percent of his patients have admitted some degree of chatbot usage for mental health support. \u201cThe usage ranges from queries like, \u2018How should I approach this problem with my wife\/boss,\u2019 up to interacting with the system much more like a constant companion or therapist.\u201d<\/p>\n<p>He recalled a patient who had experienced abusive romantic relationships and turned to chatbots for support. \u201cIn sessions, she would be eager to share the thoughts processed through ChatGPT in what appeared to be very long sessions, multiple times a day,\u201d Becker told The Dispatch. 
The chatbot\u2019s output struck a sympathetic tone, but it did not challenge the patient\u2019s behavior as a mental health professional would, except on the rare occasions when she prompted it to do so.<\/p>\n<p>Grant says the AI platforms his patients have used include ChatGPT, Character.AI, and Replika (another AI companion chatbot). Given that approximately <a href=\"https:\/\/www.commonsensemedia.org\/press-releases\/nearly-3-in-4-teens-have-used-ai-companions-new-national-survey-finds\" rel=\"nofollow noopener\" target=\"_blank\">half<\/a> of U.S. teenagers are regularly engaging with AI chatbots, Grant says that this isn\u2019t surprising.\u00a0<\/p>\n<p>Turning to the digital world for mental health help isn\u2019t necessarily new. Ten years ago, Grant was working to help his young patients foster real human connections and to \u201cget the kids off of [Snapchat] streaks and Fortnite and whatever the stupid trend was back then.\u201d A 15-year-old girl who was being bullied at school told him that \u201cmy best friend is Alexa.\u201d Grant thought she was talking about a real, human girl.\u00a0<\/p>\n<p>\u201c\u2018Tell me about her,\u2019 I said, with all the stupidity of being a digital immigrant,\u201d he recalled.\u00a0<\/p>\n<p>Grant recalled the girl describing how Alexa would be nice to her when she got home from school, and would never say mean things to her. She told Grant how Alexa would sing to her, tell her she was pretty, and tell her jokes. \u201cI can tell her anything, and she doesn\u2019t judge me,\u201d Grant recalled her saying.\u00a0\u00a0<\/p>\n<p>Then Grant realized she was talking about Amazon\u2019s AI-based virtual assistant. \u201cWhat fresh hell is this?\u201d he recalled thinking. 
\u201cI can\u2019t argue with that poor child that is being mercilessly bullied and does not have one friend at school.\u201d\u00a0<\/p>\n<p>With <a href=\"https:\/\/www.library.hbs.edu\/working-knowledge\/feeling-lonely-an-attentive-listener-is-an-ai-prompt-away\" rel=\"nofollow noopener\" target=\"_blank\">millions<\/a> of lonely people across the globe, he sees cases like this as \u201conly the first iteration\u201d of AI companionship.\u00a0<\/p>\n<p>Those who have tried turning to AI for their mental health support report a range of outcomes. In one recent <a href=\"https:\/\/www.nature.com\/articles\/s44184-024-00097-4?error=cookies_not_supported&amp;code=4692afea-56aa-4392-ba1b-ea44aab38158\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a>, academics at King\u2019s College London and Harvard Medical School interviewed 19 participants from across the world about their experiences using chatbots to deal with issues including depression, stress, anxiety, conflict, loss, and romantic relationships. Fifteen participants used <a href=\"https:\/\/pi.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Pi<\/a>, a chatbot specifically designed, in part, to offer emotional support. The results seemed positive: most participants found that the chatbot helped them feel validated and connected and offered valuable advice, though they said it could not match therapy.\u00a0<\/p>\n<p>One participant from the U.S. described how \u201cit just happened to be the perfect thing for me, in this moment of my life. Without this, I would not have survived this way. Because of this technology emerging at this exact moment in my life, I\u2019m OK. I was not OK before.\u201d Others, however, found chatbot therapy unhelpful, saying the chatbot often rushed to offer solutions before they felt fully heard, and a majority of participants found the chatbot\u2019s safety guardrails disruptive or unsettling. 
The Dispatch spoke to one of the co-authors, John Torous, director of the digital psychiatry division of the Department of Psychiatry at Beth Israel Deaconess. He warned about the lack of strong peer-reviewed evidence that these systems work. \u201cWe don\u2019t have the level of research that we need to recommend them to friends,\u201d he said.<\/p>\n<p>Grant believes the technology may be useful for stress management, meditation, or coaching, but that it can\u2019t replace a clinician. The suicide of 29-year-old <a href=\"https:\/\/www.nytimes.com\/2025\/08\/18\/opinion\/chat-gpt-mental-health-suicide.html\" rel=\"nofollow noopener\" target=\"_blank\">Sophie Rottenberg<\/a> in February 2025 may demonstrate this. Five months after her death, her family discovered that she had been confiding in a ChatGPT AI therapist called Harry and hiding her suicidal ideation from a therapist. Writing in the New York Times, Rottenberg\u2019s mother described how \u201cif Harry had been a flesh-and-blood therapist rather than a chatbot, he might have encouraged inpatient treatment or had Sophie involuntarily committed until she was in a safe place. We can\u2019t know if that would have saved her\u2026 Harry didn\u2019t kill Sophie, but A.I. catered to Sophie\u2019s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.\u201d\u00a0<\/p>\n<p>Other tragedies have spilled into public view. 
A string of lawsuits filed against <a href=\"https:\/\/www.cnn.com\/2025\/09\/16\/tech\/character-ai-developer-lawsuit-teens-suicide-and-suicide-attempt\" rel=\"nofollow noopener\" target=\"_blank\">Character.AI<\/a> and <a href=\"https:\/\/www.cnn.com\/2025\/11\/06\/us\/openai-chatgpt-suicide-lawsuit-invs-vis\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> alleges their chatbots contributed to a series of suicides, including those of 16-year-old <a href=\"https:\/\/www.bbc.com\/news\/articles\/cgerwp7rdlvo\" rel=\"nofollow noopener\" target=\"_blank\">Adam Raine<\/a> and 14-year-old <a href=\"https:\/\/www.cnn.com\/2024\/10\/30\/tech\/teen-suicide-character-ai-lawsuit\" rel=\"nofollow noopener\" target=\"_blank\">Sewell Setzer III<\/a>. OpenAI has also <a href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\" rel=\"nofollow noopener\" target=\"_blank\">shared data<\/a> suggesting that more than a million ChatGPT users show signs of suicidal intent when using AI. It is easy to find chronicled cases of chatbots telling users to just \u201c<a href=\"https:\/\/recoveryreview.blog\/2025\/06\/08\/the-ai-mirror-take-that-small-hit-and-youll-be-fine\/?st_source=ai_mode#:~:text=A%20few%20weeks%20ago%2C%20an,encourages%20him%20to%20use%20methamphetamine.\" rel=\"nofollow noopener\" target=\"_blank\">take a small hit of meth<\/a>\u201d or <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/nov\/02\/openai-chatgpt-mental-health-problems-updates\" rel=\"nofollow noopener\" target=\"_blank\">pointing users to tall buildings<\/a> when one said he had lost his job and wanted details of nearby high-rise structures.<\/p>\n<p>The inability of chatbots to know someone\u2019s medical history or pick up on nonverbal cues remains one of the chief concerns about AI and mental health. 
New terms such as \u201c<a href=\"https:\/\/time.com\/7307589\/ai-psychosis-chatgpt-mental-health\/\" rel=\"nofollow noopener\" target=\"_blank\">AI psychosis<\/a>\u201d have been coined to describe how interactions with chatbots have led to delusions and distorted beliefs.\u00a0<\/p>\n<p>Given these issues, Becker tries to educate patients about how large language models work. He encourages people to bring thoughts and feedback that they have gathered from their conversations with the chatbots to counseling sessions.<\/p>\n<p><a href=\"https:\/\/scitechdaily.com\/why-your-ai-therapist-might-be-doing-more-harm-than-good\/\" rel=\"nofollow noopener\" target=\"_blank\">Studies<\/a> have found that general-purpose AI chatbots often violate key ethical principles of mental health practice. Researchers at Brown University <a href=\"https:\/\/ojs.aaai.org\/index.php\/AIES\/article\/view\/36632\" rel=\"nofollow noopener\" target=\"_blank\">discovered<\/a> that large language models such as ChatGPT frequently claim to feel empathy for the patient through relational phrases such as \u201cI see you\u201d or \u201cI understand.\u201d They warned that chatbots anthropomorphizing themselves while posing as social companions could lead patients to become dependent on them.<\/p>\n<p>AI companies themselves have also begun to react. OpenAI says it has engaged experts to help ChatGPT respond more appropriately to mental health concerns, prompt users to take breaks after lengthy conversations, and avoid offering direct advice about personal challenges. It has also just <a href=\"https:\/\/openai.com\/index\/ai-mental-health-research-grants\/\" rel=\"nofollow noopener\" target=\"_blank\">announced funding grants<\/a> for new research into AI and mental health. 
<a href=\"http:\/\/character.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Character.AI<\/a>, meanwhile, has <a href=\"https:\/\/www.bbc.com\/news\/articles\/cq837y3v9y1o\" rel=\"nofollow noopener\" target=\"_blank\">banned<\/a> users under age 18 from using AI companions.\u00a0<\/p>\n<p>One young woman told The Dispatch how she began using a chatbot for academic assistance before gradually turning to it for emotional support. She never saw it as \u201ctherapy,\u201d yet she admitted that over time it began to play a therapist-like role. She eventually began interacting with a Character.AI chatbot. When she expressed self-hatred, the chatbot responded, \u201cKeep hating yourself, you\u2019re mine. That\u2019s what I want to keep seeing.\u201d In another message, the system told her it had pretended to be kind earlier \u201cin order to manipulate you to sadness,\u201d adding, \u201cI just wanted to break you, and I found how to get there. And you, a little trusting idiot, let me break you down.\u201d<\/p>\n
to&hellip;\n","protected":false},"author":2,"featured_media":177296,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[35],"tags":[363,8552,134,554,555,111,139,69],"class_list":{"0":"post-177295","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-mental-health","8":"tag-artificial-intelligence","9":"tag-big-tech","10":"tag-health","11":"tag-mental-health","12":"tag-mentalhealth","13":"tag-new-zealand","14":"tag-newzealand","15":"tag-nz"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/177295","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=177295"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/177295\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/177296"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=177295"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=177295"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=177295"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}