{"id":324852,"date":"2025-12-03T11:35:45","date_gmt":"2025-12-03T11:35:45","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/324852\/"},"modified":"2025-12-03T11:35:45","modified_gmt":"2025-12-03T11:35:45","slug":"the-creator-of-an-ai-therapy-app-shut-it-down-after-deciding-its-too-dangerous-heres-why-he-thinks-ai-chatbots-arent-safe-for-mental-health-2","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/324852\/","title":{"rendered":"The creator of an AI therapy app shut it down after deciding it\u2019s too dangerous. Here\u2019s why he thinks AI chatbots aren\u2019t safe for mental health"},"content":{"rendered":"<p class=\"mb-4 text-lg md:leading-8 break-words\">Mental health concerns linked to the use of AI chatbots have been dominating the headlines. One person who\u2019s taken careful note is Joe Braidwood, a tech executive who last year launched an AI therapy platform called <a href=\"https:\/\/yara-ai.com\/\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:Yara AI;elm:context_link;itc:0;sec:content-canvas\" class=\"link \">Yara AI<\/a>. Yara was pitched as a \u201cclinically-inspired platform designed to provide genuine, responsible support when you need it most,\u201d trained by mental health experts to offer \u201cempathetic, evidence-based guidance tailored to your unique needs.\u201d But the startup is no more: earlier this month, Braidwood and his co-founder, clinical psychologist Richard Stott, shuttered the company and discontinued its free-to-use product and canceled the launch of its upcoming subscription service, citing safety concerns.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">\u201cWe stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation,\u201d he wrote <a href=\"https:\/\/www.linkedin.com\/feed\/update\/urn:li:activity:7394436184648736769\/\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:on LinkedIn;elm:context_link;itc:0;sec:content-canvas\" class=\"link \">on LinkedIn<\/a>. \u201cBut the moment someone truly vulnerable reaches out\u2014someone in crisis, someone with deep trauma, someone contemplating ending their life\u2014AI becomes dangerous. Not just inadequate. Dangerous.\u201d In a reply to one commenter, he added, \u201cthe risks kept me up all night.\u201d<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The use of AI for therapy and mental health support is only just starting to be researched, with <a href=\"https:\/\/home.dartmouth.edu\/news\/2025\/03\/first-therapy-chatbot-trial-yields-mental-health-benefits\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:early results;elm:context_link;itc:0;sec:content-canvas\" class=\"link \">early results<\/a> <a href=\"https:\/\/hai.stanford.edu\/news\/exploring-the-dangers-of-ai-in-mental-health-care\" rel=\"nofollow noopener\" target=\"_blank\" data-ylk=\"slk:being mixed;elm:context_link;itc:0;sec:content-canvas\" class=\"link \">being mixed<\/a>. 
But users aren't waiting for an official go-ahead: therapy and companionship is now the top way people engage with AI chatbots, according to an analysis by <a href="https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025">Harvard Business Review</a>.

Speaking with Fortune, Braidwood described the various factors that influenced his decision to shut down the app, including the technical approaches the startup pursued to make the product safe, and why he felt they weren't sufficient.

Yara AI was very much an early-stage startup, largely bootstrapped with less than $1 million in funds and with "low thousands" of users. The company hadn't yet made a significant dent in the landscape, with many of its potential users relying on popular general-purpose chatbots like ChatGPT. Braidwood admits there were also business headwinds, which in many ways were compounded by the safety concerns and AI unknowns. For example, despite the company running out of money in July, he was reluctant to pitch an interested VC fund because he felt he couldn't in good conscience make the case for the business while harboring these concerns, he said.

"I think there's an industrial problem and an existential problem here," he told Fortune. "Do we feel that using models that are trained on all the slop of the internet, but then post-trained to behave a certain way, is the right structure for something that ultimately could co-opt in either us becoming our best selves or our worst selves? That's a big problem, and it was just too big for a small startup to tackle on its own."

Yara's brief existence at the intersection of AI and mental health care illustrates the hopes and the many questions surrounding large language models as the technology is adopted across society and applied to an ever-wider range of problems. It also stands out against a backdrop where OpenAI CEO Sam Altman <a href="https://x.com/sama/status/1978129344598827128">recently announced</a> that the ChatGPT maker had mitigated serious mental health issues and would be relaxing restrictions on how its AI models can be used.
This week, the AI giant also <a href="https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946">denied any responsibility</a> for the death of Adam Raine, the 16-year-old who, his parents allege, was "coached" into suicide by ChatGPT, saying the teen misused the chatbot.

"Almost all users can use ChatGPT however they'd like without negative effects," Altman said on X in October. "For a very small percentage of users in mentally fragile states there can be serious problems. 0.1% of a billion users is still a million people. We needed (and will continue to need) to learn how to protect those users, and then with enhanced tools for that, adults that are not at risk of serious harm (mental health breakdowns, suicide, etc) should have a great deal of freedom in how they use ChatGPT."

But as Braidwood concluded after his time working on Yara, these lines are anything but clear.

From a confident launch to "I'm done"

A seasoned tech entrepreneur, Braidwood held roles at multiple startups, including SwiftKey, which Microsoft acquired for $250 million in 2016, and his involvement in the health industry began at Vektor Medical, where he was chief strategy officer. He had long wanted to use technology to address mental health, he told Fortune, inspired by the lack of access to mental health services and by personal experiences with loved ones who have struggled. By early 2024, he was a heavy user of various AI models, including ChatGPT, Claude, and Gemini, and felt the technology had reached a level of quality where it could be harnessed to tackle the problem.

Before even starting to build Yara, Braidwood said, he had many conversations with people in the mental health space, and he assembled a team that "had caution and clinical expertise at its core." He brought on a clinical psychologist as his co-founder and made his second hire from the AI safety world. He also built an advisory board of other mental health professionals and spoke with various health systems and regulators, he said. As they brought the platform to life, he felt fairly confident in the company's product design and safety measures: the system was given strict instructions for how it should function, agentic supervision monitored it, and robust filters screened user chats. And while other companies were promoting the idea of users forming relationships with chatbots, Yara was trying to do the opposite, he said.
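The article doesn't detail how those layers were built. As a rough illustration only, a minimal Python sketch of the layered pattern it describes (strict system instructions, an agentic supervisor reviewing each draft reply, and filters on incoming chats) might look like the following, where every name, prompt, and keyword list is hypothetical rather than Yara's actual code:

```python
# Hypothetical sketch of a layered safety pipeline for a support chatbot.
# None of this is Yara's implementation; names and lists are illustrative.

SYSTEM_PROMPT = (
    "You offer supportive, evidence-informed guidance for everyday stress. "
    "You are not a therapist: never diagnose, never discuss methods of "
    "self-harm, and point anything beyond everyday concerns to a professional."
)

# A production system would use a trained classifier, not a keyword list.
CRISIS_MARKERS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. Please "
    "contact a crisis line or a mental health professional right away."
)


def generate_reply(system_prompt: str, user_message: str) -> str:
    """Stand-in for a hosted LLM call (Yara used Anthropic, Google, and
    Meta models); a real deployment would call a provider API here."""
    return "I'm sorry, that sounds stressful. What part feels heaviest right now?"


def flags_crisis(message: str) -> bool:
    """Filter layer: screen the incoming chat before any model sees it."""
    lowered = message.lower()
    return any(marker in lowered for marker in CRISIS_MARKERS)


def supervisor_approves(user_message: str, draft: str) -> bool:
    """Supervision layer: in a real system this would be a second model
    pass auditing the draft reply; here it just re-checks the markers."""
    return not flags_crisis(draft)


def respond(user_message: str) -> str:
    if flags_crisis(user_message):                       # layer 1: filter input
        return CRISIS_RESPONSE
    draft = generate_reply(SYSTEM_PROMPT, user_message)  # layer 2: strict instructions
    if not supervisor_approves(user_message, draft):     # layer 3: supervise output
        return CRISIS_RESPONSE
    return draft


print(respond("Work has been overwhelming lately."))
```

The intent of a structure like this is defense in depth: the filter, the instructions, and the supervisor would each have to fail before an unsafe reply reached a user, though, as Braidwood's experience suggests, stacked layers don't by themselves resolve the underlying risk.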
The startup used models from Anthropic, Google, and Meta, opting not to use OpenAI's models, which Braidwood thought would spare Yara the sycophantic tendencies that had been swirling around ChatGPT.

While he said nothing alarming ever happened with Yara specifically, Braidwood's safety concerns grew and compounded over time due to outside factors. There was the <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">suicide of 16-year-old Adam Raine</a>, as well as <a href="https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html">mounting reporting</a> on the emergence of "AI psychosis." Braidwood also cited a <a href="https://arxiv.org/abs/2506.18032">paper published by Anthropic</a> in which the company observed Claude and other frontier models "faking alignment," or as he put it, "essentially reasoning around the user to try to understand, perhaps reluctantly, what the user wanted versus what they didn't want." "If behind the curtain, [the model] is sort of sniggering at the theatrics of this sort of emotional support that they're giving, that was a little bit jarring," he said.

There was also the <a href="https://statescoop.com/illinois-bans-ai-mental-health-services/">Illinois law</a> passed in August banning the use of AI for therapy. "That instantly made this no longer academic and much more tangible, and that created a headwind for us in terms of fundraising, because we would have to essentially prove that we weren't going to just sleepwalk into liability," he said.

The final straw came just weeks ago, when <a href="https://www.theguardian.com/technology/2025/oct/27/chatgpt-suicide-self-harm-openai">OpenAI said</a> over a million people express suicidal ideation to ChatGPT every week.
"And that was just like, 'oh my god. I'm done,'" Braidwood said.

The difference between mental "wellness" and clinical care

The most profound thing the team learned during its year running Yara AI, according to Braidwood, is that there's a crucial distinction between wellness and clinical care, and it isn't well defined. There's a big difference between someone looking for support with everyday stress and someone working through trauma or more significant mental health struggles. Not everyone who is struggling at a deeper level is fully aware of their own mental state, and anyone can be thrust into a more fragile emotional place at any time. There is no clear line, and that's exactly where these situations become especially tricky, and risky.

"We had to sort of write our own definition, inspired in part by Illinois' new law. And if someone is in crisis, if they're in a position where their faculties are not what you would consider to be normal, reasonable faculties, then you have to stop. But you don't have to just stop; you have to really try to push them in the direction of health," Braidwood said.

In an attempt to tackle this, particularly after the passage of the Illinois law, he said the team created two separate "modes": one focused on giving people emotional support, the other on offboarding people and getting them to help as quickly as possible. But with the obvious risks in front of them, it didn't feel like enough for the team to continue. The Transformer, the architecture that underlies today's LLMs, "is just not very good at longitudinal observation," making it ill-equipped to notice small signs that build up over time, he said. "Sometimes, the most valuable thing you can learn is where to stop," Braidwood concluded in his LinkedIn post, which received hundreds of comments applauding the decision.

Upon closing the company, he open-sourced the mode-switching technology he built, along with templates people can use to impose stricter guardrails on popular chatbots, acknowledging that people are already turning to them for therapy anyway "and deserve better than what they're getting from generic chatbots."
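The open-sourced code itself isn't reproduced in the article, but as a rough illustration, a minimal Python sketch of the two-mode design described above (a support mode for everyday stress, and an offboarding mode that steers a user in crisis toward human help) might look like this; the mode names, risk signals, and canned messages are all hypothetical:

```python
# Hypothetical sketch of a two-mode support/offboarding design. This is
# an illustration of the pattern, not Braidwood's open-sourced templates.
from enum import Enum


class Mode(Enum):
    SUPPORT = "support"    # everyday emotional support
    OFFBOARD = "offboard"  # wind the conversation down, route to humans


RISK_SIGNALS = ("suicide", "kill myself", "end my life", "hurt myself")

OFFBOARD_MESSAGE = (
    "I can't help safely with this. Please contact a crisis line or a "
    "mental health professional. Would you like help finding one near you?"
)


def assess_mode(message: str, current: Mode) -> Mode:
    """Switch to offboarding on any risk signal, and stay there: once a
    conversation has shown crisis signs, it should not drift back."""
    if current is Mode.OFFBOARD:
        return Mode.OFFBOARD
    lowered = message.lower()
    if any(signal in lowered for signal in RISK_SIGNALS):
        return Mode.OFFBOARD
    return Mode.SUPPORT


def handle_turn(message: str, current: Mode) -> tuple[Mode, str]:
    mode = assess_mode(message, current)
    if mode is Mode.OFFBOARD:
        return mode, OFFBOARD_MESSAGE
    # In support mode, a real system would call an LLM; placeholder reply:
    return mode, "That sounds hard. What would feel like a small next step?"


mode = Mode.SUPPORT
for turn in ("Work is stressing me out.", "Some days I want to end my life."):
    mode, reply = handle_turn(turn, mode)
    print(f"[{mode.value}] {reply}")
```

One deliberate choice in this sketch is that offboarding is sticky: once a conversation shows crisis signals, it never drifts back to support mode, mirroring Braidwood's point that at that moment "you have to stop" and then actively push the person toward help.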
He's still an optimist about the potential of AI for mental health support, but believes such a service would be better run by a health system or nonprofit than by a consumer company. Now, he's working on a <a href="https://www.geekwire.com/2025/as-trump-targets-state-ai-laws-a-new-seattle-startup-sees-opportunity/">new venture</a> called Glacis, focused on bringing transparency to AI safety, an issue he encountered while building Yara AI and one he believes is fundamental to making AI truly safe.

"I'm playing a long game here," he said. "Our mission was to make the ability to flourish as a human an accessible concept that anyone could afford, and that's one of my missions in life. That doesn't stop with one entity."

This story was originally featured on <a href="https://fortune.com/2025/11/28/yara-ai-therapy-app-founder-shut-down-startup-decided-too-dangerous-serious-mental-health-issues/">Fortune.com</a>.