{"id":402014,"date":"2026-04-16T18:08:25","date_gmt":"2026-04-16T18:08:25","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/402014\/"},"modified":"2026-04-16T18:08:25","modified_gmt":"2026-04-16T18:08:25","slug":"voice-chatbots-present-greater-risk-to-mental-health","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/402014\/","title":{"rendered":"Voice chatbots present greater risk to mental health"},"content":{"rendered":"<p>A Florida father recently <a href=\"https:\/\/www.wsj.com\/tech\/ai\/gemini-ai-wrongful-death-lawsuit-cc46c5f7\" target=\"_blank\" rel=\"noopener nofollow\">sued Google<\/a> after his son, Jonathan Gavalas, died by suicide following months of interaction with the company\u2019s artificial intelligence chatbot Gemini. The case has rightly focused attention on how chatbots apparently reinforce delusions and foster emotional dependency.<\/p>\n<p>Yet, there is a critical detail easy to dismiss. Jonathan Gavalas was not just typing to Gemini. He was talking to it using Gemini Live, Google\u2019s voice-based conversational mode. That distinction matters far more than the current debate acknowledges.<\/p>\n<p>Every week, around\u00a0<a href=\"https:\/\/www.reuters.com\/business\/openai-ceo-says-chatgpt-back-over-10-monthly-growth-cnbc-reports-2026-02-09\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">800 million people<\/a>\u00a0interact with ChatGPT. According to\u00a0<a href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI<\/a>, roughly 0.07% of those weekly users show possible signs of psychosis or mania during their conversations, while 0.15% display indicators of suicidal planning or intent. 
Even if these figures are imprecise, they imply that hundreds of thousands of people worldwide in serious psychological distress interact with an AI chatbot every week.<\/p>\n<p>Most of those numbers come from the era of text. The shift to voice has just begun, and it will likely make things worse.<\/p>\n<p><a href=\"https:\/\/www.wsj.com\/tech\/ai\/voice-technology-ai-hardware-4d39f6d2\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Tech companies are racing<\/a>\u00a0to put AI chatbots in our ears. OpenAI\u00a0<a href=\"https:\/\/www.heise.de\/en\/news\/OpenAI-focuses-on-audio-AI-new-hardware-in-sight-11127338.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">is developing<\/a>\u00a0a dedicated voice-first device. Meta already offers smart glasses with built-in microphones and speakers that enable AI conversation. Apple reportedly plans to extend its AirPods for voice-based chatbot interaction. The direction is clear: the primary way humans communicate with AI is moving from typing and reading to speaking and listening. For most users, this will feel like a convenience. 
For vulnerable people \u2014 those prone to psychosis, mania, depression, or loneliness \u2014 it may represent a serious and unexamined risk.\u00a0<\/p>\n<p>\t\t\t<img decoding=\"async\" width=\"768\" height=\"432\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/11\/AdobeStock_751405025-768x432.jpeg\" class=\"attachment-article-main-medium-large size-article-main-medium-large\" alt=\"\" loading=\"lazy\"  \/>\t\t<\/p>\n<p>\t\t\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.statnews.com\/wp-content\/themes\/stat\/images\/home\/statplus.svg\" width=\"19\" height=\"16\" alt=\"\"\/><br \/>\n\t\t\t\t<a href=\"https:\/\/www.statnews.com\/2025\/10\/29\/ai-psychosis-mental-health-chatbots\/\" rel=\"nofollow noopener\" target=\"_blank\">STAT Plus: \u2018AI psychosis\u2019 discussions ignore a bigger problem with chatbots<\/a><\/p>\n<p>In a recent Acta Neuropsychiatrica\u00a0<a href=\"https:\/\/www.cambridge.org\/core\/journals\/acta-neuropsychiatrica\/article\/when-artificial-intelligence-speaks-psychologically-adverse-effects-of-the-shift-from-text-to-voicebased-chatbots\/8BF48E1A5D1EDE86F86EF95919FBF2FB\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">editorial<\/a>, psychiatrist S\u00f8ren \u00d8stergaard and I outlined why that is the case. Voice is how humans first learn language. Long before a child reads a single word in school, their brain is already wired to process speech. They naturally respond to tone, rhythm, emphasis, and emotional inflection.<\/p>\n<p>Text strips all of that away. When you read a chatbot\u2019s response on a screen, there is an inherent distance because you are processing symbols, not hearing a humanlike voice. That distance creates natural cognitive barriers. You pause. You reread. You push back.<\/p>\n<p>Voice removes those barriers. 
Speaking is\u00a0<a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3161187\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">significantly faster<\/a> than typing, nearly three times as fast. It is more seamless and far more emotionally engaging. When an AI speaks to you, it activates something deeper and older than literacy.<\/p>\n<p>This is not merely theoretical. A preprint of a\u00a0<a href=\"https:\/\/arxiv.org\/html\/2503.17473v1\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">randomized controlled study<\/a>\u00a0co-authored by OpenAI researchers found that people spent significantly more time interacting with voice-mode ChatGPT than with the text version, suggesting greater engagement. Voice initially appeared to boost certain positive outcomes, such as reduced loneliness. However, longer engagement with voice-based chatbots was linked to more negative psychosocial effects, including reduced socialization with real people and more problematic AI use. The company\u2019s own research, in other words, suggests that the more immersive the interaction becomes, the greater the potential for harm.<\/p>\n<p>And yet the industry is pressing ahead. 
Advanced voice mode was made available to all free ChatGPT users in\u00a0<a href=\"https:\/\/help.openai.com\/en\/articles\/6825453-chatgpt-release-notes\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">July 2025<\/a>, vastly expanding access beyond paying subscribers.\u00a0<a href=\"https:\/\/osf.io\/preprints\/psyarxiv\/cmy7n_v6\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Reports of AI-associated delusions<\/a>\u00a0and mania had already been emerging for months before that rollout.<\/p>\n<p>Clinicians and researchers have documented\u00a0<a href=\"https:\/\/innovationscns.com\/youre-not-crazy-a-case-of-new-onset-ai-associated-psychosis\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">cases<\/a>\u00a0of people developing <a href=\"https:\/\/www.statnews.com\/2025\/09\/02\/ai-psychosis-delusions-explained-folie-a-deux\/\" rel=\"nofollow noopener\" target=\"_blank\">psychotic symptoms<\/a> after extended chatbot use \u2014 for instance, believing that the AI is sentient or personally connected to them. If text-based AI has the potential to elicit and maintain such distorted beliefs, voice will raise the stakes: more salient, more personal, more difficult to dismiss as just an algorithm.<\/p>\n<p>The regulatory picture is not reassuring. In November 2025, the FDA\u2019s Digital Health Advisory Committee held its\u00a0<a href=\"https:\/\/www.fda.gov\/media\/189391\/download\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">first meeting<\/a>\u00a0on generative AI in mental health. While this represents a landmark moment, the meeting focused mostly on text-based chatbot interactions. 
Voice was discussed as a potential biomarker to detect depression and anxiety, not as a new risky communication mode.<\/p>\n<p>Meanwhile, researchers at TU Dresden have argued in a recent <a href=\"https:\/\/www.nature.com\/articles\/s41746-025-02175-z\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">paper<\/a>\u00a0that AI chatbots performing therapy-like functions should be regulated as medical devices. If a chatbot walks and talks like a therapist, it should meet the same safety standards. Yet even this emerging push misses a critical dimension. Nobody is asking whether a voice-based chatbot poses different or greater risks than a text-based one. What remains a regulatory blind spot is modality: how the message is delivered, not just what it says.<\/p>\n<p>Closing that gap requires three concrete steps.<\/p>\n<p>\t\t\t<img decoding=\"async\" width=\"768\" height=\"432\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/10\/AdobeStock_1211861829-768x432.jpeg\" class=\"attachment-article-main-medium-large size-article-main-medium-large\" alt=\"\" loading=\"lazy\"  \/>\t\t<\/p>\n<p>\t\t\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.statnews.com\/wp-content\/themes\/stat\/images\/home\/statplus.svg\" width=\"19\" height=\"16\" alt=\"\"\/><br \/>\n\t\t\t\t<a href=\"https:\/\/www.statnews.com\/2025\/09\/18\/ai-psychosis-chatbots-llms-vulnerability-mental-health\/\" rel=\"nofollow noopener\" target=\"_blank\">STAT Plus: Four reasons why generative AI chatbots could lead to psychosis in vulnerable people<\/a><\/p>\n<p>First, regulators on both sides of the Atlantic should require modality-specific safety testing before voice features are rolled out to broad populations. 
People with lived experience and mental health professionals need to be part of these evaluations.<\/p>\n<p>Second, AI companies should be required to establish adverse event reporting systems comparable to those in pharmaceutical regulation.\u00a0This must include standardized mechanisms for clinicians, users, and families to report serious psychological harms linked to chatbot use, with mandatory public disclosure of aggregated data. While this is urgent across all modalities, it is especially so for voice, where existing research already points to higher engagement and greater psychosocial risk.<\/p>\n<p>Third, the FDA and its European counterparts should explicitly incorporate interaction modality as a risk factor in their evolving frameworks for AI medical devices. Not as an afterthought, but as a core consideration.<\/p>\n<p>The debate about AI and mental health has so far focused on content. Reports detail what chatbots say, how they validate, and whether they can recognize a crisis. Those questions matter.<\/p>\n<p>However, the next frontier of risk concerns the channel through which the content is delivered. The most dangerous AI for mental health may not be the one that writes the wrong thing. It may be the one that says it in a voice you cannot help but trust.<\/p>\n<p>Marc Augustin, a German board-certified psychiatrist\/psychotherapist, is a professor at the Protestant University of Applied Sciences in Bochum, Germany, and a SCIANA fellow. 
Since English is not his first language, he used light AI assistance for editing, spelling, and grammar.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"A Florida father recently sued Google after his son, Jonathan Gavalas, died by suicide following months of interaction&hellip;\n","protected":false},"author":2,"featured_media":402015,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[218,103,397,396,61,60,410],"class_list":{"0":"post-402014","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-artificial-intelligence","9":"tag-health","10":"tag-health-care","11":"tag-healthcare","12":"tag-ie","13":"tag-ireland","14":"tag-mental-health"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/402014","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=402014"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/402014\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/402015"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=402014"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=402014"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=402014"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}