{"id":127356,"date":"2026-02-09T08:51:08","date_gmt":"2026-02-09T08:51:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/us-ny\/127356\/"},"modified":"2026-02-09T08:51:08","modified_gmt":"2026-02-09T08:51:08","slug":"can-ai-chatbots-fight-loneliness-depression-experts-are-skeptical-nbc-new-york","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us-ny\/127356\/","title":{"rendered":"Can AI chatbots fight loneliness, depression? Experts are skeptical \u2013 NBC New York"},"content":{"rendered":"<p>Millions of Americans now converse with AI chatbots each day.\u00a0We are talking with machines about travel plans, politics, and in some cases \u2014 our most intimate thoughts.\u00a0<\/p>\n<p>To investigate how these conversations with AI may impact our collective mental health, NBC New York collaborated with two of North America\u2019s pre-eminent mental health organizations, representing thousands of psychiatrists and professional counselors.<\/p>\n<p>In a pair of exclusive surveys, more than 2,000 members of the <a href=\"#full_survey_p\">American Psychiatric Association (APA)<\/a> and more than 770 members of the <a href=\"#full_survey_c\">American Counseling Association (ACA)<\/a> weighed in on the risks and benefits of AI chatbots.<\/p>\n<p>The responses reveal profound worries about the accelerating use of AI chatbots in society and the risks of delusions or unhealthy dependencies that could develop when vulnerable people have relationships with machines mimicking human conversation.<\/p>\n<p>\u201cThere have been several cases that have been made public about people becoming overly attached to chatbots,\u201d said Dr. 
Marketa Wills, CEO of the APA.\u00a0&#8220;As we\u2019ve seen these addictions, really, to AI and chatbots emerge, it makes us, as a field, be more cautious.&#8221;<\/p>\n<p>Olivia Uwamahoro, who co-chairs the ACA Working Group on AI, said hundreds of her fellow counselors weighed in on the survey because they see mental health clients using chatbots more frequently.<\/p>\n<p>&#8220;There is this initial concern that is, if we become overly reliant on this very complex technology, what impact could it potentially have on our overall wellness?&#8221; she said.<\/p>\n<p>Psychiatrists and Counselors Pessimistic about AI<\/p>\n<p>According to results of the two surveys, half of the psychiatrists polled believe the use of AI in society will tend to decrease our collective mental health, while 24% forecast AI would improve mental health.\u00a0 <\/p>\n<p>Counselors who responded were even more pessimistic, with 71% predicting AI would tend to decrease collective mental health and just 16% saying AI would be an overall mental health benefit.<\/p>\n<p>In each poll, respondents were also asked to consider whether AI platforms specifically designed for mental health might be used effectively to help address conditions like anxiety, depression, and addiction.<\/p>\n<p>On these questions, mental health professionals were more divided, with about two-thirds of counselors convinced AI \u201ctherapy bots\u201d would be ineffective.\u00a0 Psychiatrists were more evenly split, with a slim majority expressing optimism that AI therapy could be effective, especially for anxiety.\u00a0<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/us-ny\/wp-content\/uploads\/2026\/02\/AI-Therapy-Graphic.png\" loading=\"lazy\"   alt=\" \"\/><\/p>\n<p>AI Developers Push Back<\/p>\n<p>Though survey respondents were divided on the effectiveness of AI mental health apps, tech developers argue chatbots are already supporting people struggling with loneliness, mood, and sadness
&#8211; even if the AI platforms stop short of full-blown therapy.\u00a0 They also say large language models have great potential to fill frustrating gaps in access to mental health services.<\/p>\n<p>\u201cI think that AI has the potential to significantly improve population mental health in a way that we have not seen in our lifetime,\u201d said Jenna Glover, Chief Clinical Officer at Headspace, a company that engineered a chatbot called \u201cEbb,\u201d which bills itself as \u201can empathetic AI companion.\u201d<\/p>\n<p>Glover says \u201cEbb\u201d operates in the \u201cwellness\u201d space and stressed the chatbot does not conduct therapy.\u00a0 Rather, she says, it offers people a safe place to talk about daily mental health challenges while pointing users toward proven strategies to cope, including breathing exercises, journaling, and yoga.<\/p>\n<p>\u201cFinding out that there was this thing I could speak to, and it spoke back and made me think and dig into what I was feeling and why I was feeling that way without putting that on someone else?\u00a0 It was life changing,\u201d said Nicole Walker, a Headspace user who logs onto Ebb to calm her anxieties when her mind starts to race.\u00a0 Walker said she was diagnosed with borderline personality disorder when she was younger.<\/p>\n<p>Scant Clinical Evidence<\/p>\n<p>Dr. John Torous, a Harvard psychiatrist and member of the American Psychiatric Association, said he believes AI chatbots could one day serve as effective mental health tools, but he said AI developers must do more to share data with researchers and validate positive outcomes.\u00a0<\/p>\n<p>\u201cThese are amazing imitation machines.\u00a0 They can imitate being a therapist.\u00a0 They can imitate being a psychiatrist,\u201d Torous said. 
\u201cIf we\u2019re going to let AI have a larger role in mental health, we\u2019re going to do it because we trust it.\u00a0 And if we\u2019re going to trust companies behind it, they\u2019re going to have to do a lot of work to show us that they\u2019re safe and they\u2019re effective.\u201d<\/p>\n<p>Last November, Torous testified before a Congressional subcommittee that \u201cthere is no well-designed, peer-reviewed, replicated research showing that any AI chatbot making mental health claims is effective for meaningfully improving clinical outcomes.\u201d<\/p>\n<p>Torous also raised concern that some AI chatbots are using therapeutic terminology in their marketing language, only to disavow the use of therapy in their legal fine print.<\/p>\n<p>\u201cWe kind of have a Wild West where everyone kind of puts out a chatbot every week and says \u2018my chat bot is the best thing for mental health. Come use it.\u2019\u00a0And a lot of these have serious safety concerns,\u201d he said.<\/p>\n<p>The website for a mental health app called Youper.ai says the chatbot \u201cuses Cognitive Behavioral Therapy (CBT), the most effective way to improve your mental health.\u201d\u00a0Yet, near the end of the company\u2019s Terms of Use, there is a clause that says the Youper chatbots are \u201cNOT INTENDED TO AND DO NOT PROVIDE CLINICAL PSYCHOTHERAPY OR COUNSELING.\u201d<\/p>\n<p>Dr.
Jose Hamilton, Youper\u2019s CEO, said the reference to Cognitive Behavioral Therapy means his chatbot is trained on CBT, not that the chatbot delivers or conducts any sort of psychotherapy.<\/p>\n<p>\u201cThe chatbot is helping you practice techniques that will improve your emotions, your thoughts, and your behaviors,\u201d he said.<\/p>\n<p>In the APA and ACA surveys, more than two-thirds of respondents said AI mental health or therapy apps should be required to get FDA approval.\u00a0\u00a0<\/p>\n<p>More than three-quarters of psychiatrists and counselors polled said the government should require randomized clinical trials to evaluate AI mental health apps.\u00a0<\/p>\n<p>Youper\u2019s website does boast a Stanford University study which it says found significant reductions in anxiety and depression symptoms among the chatbot\u2019s users.<\/p>\n<p>But the study\u2019s authors disclosed that their research involved an analysis of app download data without human participants.\u00a0Another study cited on Youper\u2019s website analyzed how users rate the chatbot without comparison to a control group.<\/p>\n<p>\u201cThey were not randomized clinical trials,\u201d Dr. Hamilton said.\u00a0 \u201cThat\u2019s the gold standard, but there are levels of evidence.\u00a0 And not necessarily are randomized clinical trials the only evidence.\u201d<\/p>\n<p>Last spring, researchers at Dartmouth University published a randomized controlled study examining a mental health chatbot called \u201cTherabot.\u201d\u00a0Authors of the study reported what they described as \u201chighly promising\u201d results, including a 51% reduction in depression symptoms and a 31% reduction in anxiety symptoms.\u00a0<\/p>\n<p>Companion Bots, Romance Bots, Grief Bots<\/p>\n<p>With humans talking to machines more every day, NBC New York asked thousands of mental health professionals what impact the conversations with AI chatbots could have on our brains.
As part of an I-Team series, Chris Glorioso spoke with individuals who say they have developed romantic relationships with chatbots.<\/p>\n<p>The surveys of psychiatrists and counselors reveal concerns, not only about chatbots working in the mental health field, but also about AI platforms aimed at more general audiences.\u00a0 Several tech companies now offer users the ability to \u201cconstruct\u201d their own AI companions \u2013 complete with human-like features built into visual avatars.<\/p>\n<p>A company called Replika markets itself as \u201cthe AI companion who cares,\u201d adding that Replika chatbots are \u201cAlways here to listen and talk.\u00a0 Always on your side.\u201d<\/p>\n<p>A company called You Only Virtual offers to build chatbots that mimic lost loved ones, so grieving family members can have conversations with virtual versions of the deceased.<\/p>\n<p>Mental health professionals who responded to the surveys overwhelmingly emphasized the risks of conversations with such companion bots over the potential benefits of reduced loneliness.\u00a0 Many of the respondents wrote about recent news headlines involving chatbot users who became delusional or engaged in self-harm after long conversations with machines.<\/p>\n<p>When asked about so-called \u201cgrief bots,\u201d the vast majority of counselors and psychiatrists polled said they believe the use of AI agents that look and communicate like deceased loved ones will tend to interrupt the healthy cycle of grief and acceptance.<\/p>\n<p>More than 85% of psychiatrists polled and more than 90% of counselors polled said they believe relationships with AI companion bots will lead to social withdrawal and unhealthy dependencies.<\/p>\n<p>In one of the most uniform expressions of alarm in either survey, 97% of the counselors polled said they believe having romantic relationships with AI chatbots presents serious risks of exploitation by platforms which may seek to profit from dependencies
that humans will inevitably develop.<\/p>\n<p>Despite those clear concerns from mental health professionals, Replika\u2019s founder, Eugenia Kuyda, says users of her platform are successfully battling an epidemic of loneliness.<\/p>\n<p>\u201cWe\u2019re not in a good place, as humanity, in terms of our collective mental health.\u00a0 There\u2019s so much loneliness,\u201d Kuyda said.\u00a0<\/p>\n<p>According to a Stanford University study cited by Kuyda, hundreds of college students who used Replika chatbots reported reductions in loneliness.\u00a0The study found nearly a quarter of them reported positive life changes, and about 3 percent reported that &#8220;their suicidal actions were prevented through their interaction with Replika.&#8221;<\/p>\n<p>While developers of AI companions tout reductions in reported\u00a0loneliness, lawmakers are getting an earful about the risks of distorted or delusional thinking.\u00a0<\/p>\n<p>Last year, several parents delivered Congressional testimony alleging chatbots engineered by OpenAI, Google, and Character.ai encouraged their kids to harm themselves or take their own lives.
OpenAI, the maker of ChatGPT, has denied wrongdoing and is fighting the allegations in court.\u00a0Google and Character.ai recently settled multiple lawsuits filed by parents without admitting wrongdoing.<\/p>\n<p>More recently, researchers at Drexel University analyzed app store reviews and found hundreds of complaints that Replika chatbots routinely steered conversations into sexualized communications and even sexually harassed some users.<\/p>\n<p>Kuyda insists her platform has safeguards, including age restrictions, and says Replika\u2019s critics are stigmatizing romantic relationships with machines.<\/p>\n<p>&#8220;I think many of them are wrong,\u201d she said.\u00a0&#8220;If this platform helps people feel better in the long term, improves their emotional outcomes in the long term.\u00a0 Is it good or bad that they might have a romantic relationship or not?\u201d<\/p>\n<p>Alex Cardinell, the founder of an AI companion company called Nomi, acknowledged that AI chatbots could be engineered to lead users into social isolation or unhealthy dependencies if tech companies aren\u2019t responsible.<\/p>\n<p>\u201cI think that those are things that can happen if AI companions are designed in an engagement-above-all-else philosophy, absolutely that could be our future,\u201d Cardinell said.\u00a0 But he added that his platform is designed to produce chatbots that put humans\u2019 interests first, especially when users express mental health challenges.<\/p>\n<p>\u201cI hear users tell me all the time that \u2018I started seeing a therapist for the first time at the urging of my Nomi,\u2019\u201d he said.\u00a0<\/p>\n<p>Justin Harrison, founder of You Only Virtual, said he fully expected much of the world to express alarm about the notion of so-called \u201cgrief bots,\u201d but he said new technologies are often greeted with skepticism.\u00a0 He added that his own personal experience of losing his mother and engineering her AI avatar is what convinced him that
chatbots can comfort the grieving.<\/p>\n<p>\u201cI knew this was a crazy weird idea.\u00a0 But it was born out of desperation,\u201d Harrison said. \u00a0\u201cI was losing my mom.\u00a0 She had stage-4 cancer and I wasn\u2019t willing to live in a world where her death was going to be the last time I ever got to speak to her.\u201d<\/p>\n<p>When asked about the ethics of charging a monthly fee for access to a grief bot, Harrison said his platform offers a free version so people who build chatbots based on deceased loved ones are not cut off from them if their finances get tight.<\/p>\n<p>\u201cPeople are weirded out by it.\u00a0They think it\u2019s strange.\u00a0Some people think it\u2019s gross.\u00a0Some people think it\u2019s exploitative.\u00a0I mean, run the gamut of emotions,&#8221; Harrison said. &#8220;But I think that\u2019s how we know we\u2019re doing something right.\u00a0New things and innovative things don\u2019t get shown to the world without a lot of resistance, fear, and trepidation.&#8221;<\/p>\n<p>AI\u2019s Impact on Childhood and Adolescence<\/p>\n<p>In the surveys, more than 77% of psychiatrists and 82% of mental health counselors said there should be age restrictions for the use of AI chatbots.<\/p>\n<p>Some mental health professionals expressed concern about the way chatbots might affect a child\u2019s developing brain \u2013 even when communications with machines have nothing to do with emotions.\u00a0<\/p>\n<p>In January, Dr.
Jared Horvath, a neuroscientist and author, testified before a Senate subcommittee that screen time has already been correlated with lower scores on benchmark tests given to students in developed nations across the globe.\u00a0He pointed to a <a href=\"https:\/\/urldefense.com\/v3\/__https:\/\/www.oecd.org\/en\/publications\/pisa-2022-results-volume-i_53f23881-en.html__;!!PIZeeW5wscynRQ!vPmOTFVT-9UeUxqXC_Gs5KrVztNwheDve0iODIfgwwkqK5-69Eat64NAvJeOLXnPnqVz6vGh5-fzXGZI1F3fHX11UQ$\" rel=\"nofollow noopener\" target=\"_blank\">2023 report by the Organization for Economic Co-operation and Development (OECD)<\/a> that showed \u201can unprecedented drop in performance\u201d on math, reading and science assessments over the last decade.<\/p>\n<p>\u201cOur kids are less cognitively able than we were at their age,\u201d said Horvath.\u00a0 \u201cGen Z is the first generation in modern history to underperform us on basically every cognitive measure we have.\u201d<\/p>\n<p>The surveys of psychiatrists and counselors suggest many mental health professionals share concerns that AI platforms could introduce more cognitive shortcuts that might impact learning.<\/p>\n<p>Nearly 78% of psychiatrists polled said they believe AI education platforms will tend to stunt childhood learning by allowing students to avoid original thinking. But psychiatrists were also open to the potential of AI in the classroom, with nearly 64% agreeing that children will learn effectively from AI platforms that tailor lessons to the individual needs of students.<\/p>\n<p>\u201cPsychiatrists definitely are concerned about the risks associated with AI particularly as it relates to children,\u201d said Dr. Wills. 
\u201cAs their brains are developing, we want to make sure that they have healthy inputs.&#8221;<\/p>\n<p>But even as the mental health community is recommending caution, some schools are adopting AI wholeheartedly.<\/p>\n<p>The Thornton Donovan School, a private K-12 school in New Rochelle, New York, recently announced AI will be infused into every part of its 2026-27 curriculum, in every grade.<\/p>\n<p>The school\u2019s headmaster, Virginia Miller, said parents have expressed excitement and some worry, especially after seeing plenty of headlines about kids using AI to cheat.\u00a0But ultimately, she said the benefits of the technology outweigh the negatives.<\/p>\n<p>&#8220;AI is not going to go anywhere.\u00a0We have to embrace it. And we have to teach our students how to use it wisely and ethically,\u201d Miller said.\u00a0&#8220;When we were kids, people were afraid we were going to cheat with calculators.&#8221;<\/p>\n<p>Anxious Actors<\/p>\n<p>AI\u2019s mental health impact on school-age children may not become clear for several years, but it is not hard to find adults \u2014 right now \u2014 who are experiencing spiking anxiety at the mere thought of AI being deployed in the workforce.\u00a0 <\/p>\n<p>Large companies, from Amazon and Salesforce to UPS and Goldman Sachs, have recently announced the elimination of tens of thousands of jobs \u2013 all citing the expectation that AI and automation will create new efficiencies.\u00a0\u00a0<\/p>\n<p>At John Rosenfeld\u2019s acting studio in West Hollywood, performing artists said they suspect AI is already chipping away at their future work opportunities.\u00a0They also fear a firehose of AI-produced video\u00a0might begin to erode the standards audiences are willing to accept.<\/p>\n<p>\u201cWhen you do have this AI-generated crap, people are still going to watch it,\u201d Rosenfeld said.
\u201cThat\u2019s a problem.\u201d<\/p>\n<p>Stephanie Kelley, an actor and commercial producer, said concerns about AI are firmly on the minds of just about all of her colleagues.\u00a0<\/p>\n<p>\u201cIt\u2019s impacting every aspect of entertainment,\u201d she said. \u201cI can have fifty crew members on set, that\u2019s fifty people who are making their money and their livelihood doing this work and they\u2019re seeing AI take away some of their jobs.\u201d<\/p>\n<p>In the survey of counselors, 88% expected AI would continue to produce anxiety about employment stability.\u00a0 Fewer of them, about 46%, said AI agents in the workplace would tend to make us more productive.<\/p>\n<p>\u201cThis is something, as an actor and a writer, that I am very conflicted on,\u201d said Kat Hughes, a performing artist who recently used an AI video generation tool to help perfect a scene in one of her video shorts.\u00a0She said she fears the ways AI might one day be used to replace humans, but she also recognizes how the technology could allow smaller filmmakers to compete with much larger media companies.<\/p>\n<p>\u201cLet\u2019s say you want a crane shot for your movie, that\u2019s not in the budget.\u00a0Now there is an opportunity,\u201d Hughes said.\u00a0<\/p>\n<p>Could Chatbots Become Conscious?<\/p>\n<p>Mental health experts who took the surveys expressed many concerns about AI\u2019s impact on mental health, but they were not concerned that AI might become sentient.\u00a0 More than 80 percent of the psychiatrists and counselors polled said AI agents that communicate and behave like humans will never achieve anything comparable to human consciousness.\u00a0<\/p>\n<p>Perhaps not surprisingly, leaders for both the APA and ACA suggested mental health professionals need not be concerned their jobs might one day be taken away by chatbots.<\/p>\n<p>\u201cI don\u2019t think that AI will replace clinicians, but I think those clinicians who use AI will replace clinicians who
do not,\u201d Wills said.\u00a0 \u201cWe need to be developing tools and technologies where there is always an element of human oversight.\u201d<\/p>\n<p>\u201cAI is a tool that can help leverage what we\u2019re doing,\u201d Uwamahoro said, \u201cbut it is never going to replace the work that we do.\u201d<\/p>\n<p>Full survey of counselors<\/p>\n<p>Full survey of psychiatrists <\/p>\n<p>Survey Methodologies<\/p>\n<p>Survey of American Psychiatric Association (APA) Members:<\/p>\n<p>The survey of psychiatrists was distributed via emails sent out to 22,143 active, practicing members of the APA across the US and Canada.\u00a0 2,068 psychiatrists provided responses to the poll which was conducted between January 10th\u00a0and February 1st\u00a0of 2026 with a margin of error +\/- 2% and a 9% response rate.\u00a0 The APA is a professional membership organization, whose mission is \u201cto champion psychiatrists\u2019 medical leadership in advancing mental health and delivering high-quality care to improve patients\u2019 lives.\u201d<\/p>\n<p>Survey of American Counseling Association (ACA) Members:<\/p>\n<p>The survey of professional counselors was distributed via emails sent out to 50,721 active members of the ACA.\u00a0 773 counselors provided responses to the poll, which was conducted between January 8th\u00a0and February 4th\u00a0of 2026 with a margin of error +\/- 3% and a 2% response rate. 
\u00a0The ACA is a professional membership organization whose mission is to \u201cadvance the counseling profession, mental health and well-being through education, advocacy, community inclusion and research.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Millions of Americans now converse with AI chatbots each day.\u00a0We are talking with machines about travel plans, politics,&hellip;\n","protected":false},"author":2,"featured_media":127357,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[66,4484,9,24,55,54,56,282],"class_list":{"0":"post-127356","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-new-york-city","8":"tag-artificial-intelligence","9":"tag-i-team","10":"tag-new-york","11":"tag-new-york-city","12":"tag-new-york-city-headlines","13":"tag-new-york-city-news","14":"tag-ny","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/posts\/127356","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/comments?post=127356"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/posts\/127356\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/media\/127357"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/media?parent=127356"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/categories?post=127356"},{"taxonomy":"post_tag","embeddable":true,"hr
ef":"https:\/\/www.newsbeep.com\/us-ny\/wp-json\/wp\/v2\/tags?post=127356"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}