{"id":484004,"date":"2026-02-22T15:14:36","date_gmt":"2026-02-22T15:14:36","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/484004\/"},"modified":"2026-02-22T15:14:36","modified_gmt":"2026-02-22T15:14:36","slug":"ai-is-providing-emotional-support-at-scale","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/484004\/","title":{"rendered":"AI Is Providing Emotional Support at Scale"},"content":{"rendered":"<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color min-h-[6.375rem] lg:min-h-[4.75rem] dropcap text-left\" data-testid=\"paragraph-content\">At least once a month, two-thirds of people who regularly use AI turn to their bots for advice on sensitive personal issues and emotional support.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders\u2014and the companies building AI. That\u2019s according to <a href=\"https:\/\/www.cip.org\/2025gdindex\" rel=\"nofollow noopener\" target=\"_blank\">data<\/a> from 70 countries, gathered by the <a href=\"https:\/\/time.com\/collections\/time100-ai-2024\/7012847\/saffron-huang-divya-siddarth\/\" rel=\"nofollow noopener\" target=\"_blank\">Collective Intelligence Project<\/a> (CIP). 
As CIP\u2019s research director, neuroscientist <a href=\"https:\/\/www.cip.org\/zarinah\" rel=\"nofollow noopener\" target=\"_blank\">Zarinah Agnew<\/a> puts it, AI is becoming \u201cemotional infrastructure at scale.\u201d And it\u2019s being built by companies whose economic incentives may not align with our wellbeing.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Already, we\u2019ve seen instances of AI companies optimizing their models to keep people engaged, even when this goes against users\u2019 best interests. Last April, OpenAI had to <a href=\"https:\/\/www.nytimes.com\/2025\/11\/23\/technology\/openai-chatgpt-users-risks.html\" rel=\"nofollow noopener\" target=\"_blank\">roll back<\/a> an <a href=\"https:\/\/openai.com\/index\/sycophancy-in-gpt-4o\/\" rel=\"nofollow noopener\" target=\"_blank\">update<\/a> to one of its ChatGPT models after it was widely criticized for being overly flattering to users. When the company stopped offering the model to people, the day before Valentine\u2019s Day, some were <a href=\"https:\/\/www.theguardian.com\/lifeandstyle\/ng-interactive\/2026\/feb\/13\/openai-chatbot-gpt4o-valentines-day\" rel=\"nofollow noopener\" target=\"_blank\">distraught<\/a>.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Humans finding comfort in machines is not new. 
In the late 1990s, MIT Professor <a href=\"https:\/\/web.media.mit.edu\/~picard\/\" rel=\"nofollow noopener\" target=\"_blank\">Rosalind Picard<\/a>\u2014who founded the field of affective computing\u2014found that people responded positively to computers performing empathy. But two key things have changed since then: thanks to technical advances, AI systems today are <a href=\"https:\/\/time.com\/7355855\/ai-mind-philosophy\/\" rel=\"nofollow noopener\" target=\"_blank\">new entities<\/a>, capable of sophisticated conversation and surprising behavior; and thanks to the billions of dollars investors have poured into AI companies, these entities are accessible to virtually anyone with an internet connection. ChatGPT alone currently <a href=\"https:\/\/www.reuters.com\/business\/openai-ceo-says-chatgpt-back-over-10-monthly-growth-cnbc-reports-2026-02-09\/\" rel=\"nofollow noopener\" target=\"_blank\">has<\/a> more than 800 million weekly active users\u2014and the number is growing.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">But with millions of people forming different kinds of human-machine relationships, we don\u2019t yet know whether AI is helping more people than it harms. And meanwhile, AI companies are investing in making their models not just smarter, but also more emotionally savvy\u2014better at detecting emotion in a person\u2019s voice, and at responding appropriately. 
People are trusting their chatbots with deeply personal information, even as they distrust the companies creating them, which are exploring advertising and other revenue models to sustain themselves.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">\u201cI think we may have a crisis on our hands,\u201d says Picard.<\/p>\n<p>Emotional beings<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Humans are inherently social. &#8220;We don\u2019t do well\u2014biologically, immunologically, neurally, or politically\u2014when we\u2019re in isolation,\u201d says Agnew. 
Today\u2019s AI systems have arrived at a time when \u201cwe\u2019ve largely failed to provision for intimacy for most people\u2014both in terms of what the state can provide and human sociality,&#8221; they say.\u00a0<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">AI is effective at providing emotional support because it offers an approximation of what Professor <a href=\"http:\/\/marcbrackett.com\" rel=\"nofollow noopener\" target=\"_blank\">Marc Brackett<\/a>\u2014head of the <a href=\"https:\/\/medicine.yale.edu\/childstudy\/services\/community-and-schools-programs\/center-for-emotional-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">Yale Center for Emotional Intelligence<\/a>\u2014calls \u201cpermission to feel,\u201d which he argues is foundational in learning to process emotions. Adults who provide this permission are \u201cnon-judgmental people who are good listeners and show empathy and compassion.\u201d In 70 studies Brackett has conducted across the world, only around 35% of people report having had an adult like that around when they were kids. 
Chatbots, which are non-judgmental, compassionate, and always available, can provide permission to feel at scale.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\"><a href=\"https:\/\/lisafeldmanbarrett.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Lisa Feldman Barrett<\/a>, a psychology professor at Northeastern University, says \u201csocial support from a trusted, reliable source can be beneficial.\u201d If an AI can reduce distress in the moment, she says that\u2019s a good thing. But healthy human relationships\u2014platonic or therapeutic\u2014do more than comfort. They challenge. A good therapist helping you change your behavior, she says, will \u201chold your feet to the fire.\u201d<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">But AI models vary in how much they meaningfully challenge their users\u2014particularly since different models perform different personalities, each of which changes slightly with each new release. The ChatGPT sycophancy episode showed that some users may prefer models that flatter them over ones that offer a challenge. 
So companies looking to maximize engagement with their chatbots may prefer to tweak them to pander.\u00a0<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Whether AI models themselves are truly emotionally intelligent is academically contested\u2014as is the definition of emotion itself. But as Picard points out, while the question of what defines emotions, and whether AI can truly be said to have them now or in the future, is interesting, \u201cwe don\u2019t need [to answer] it to build systems that have emotional intelligence.\u201d<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">The AI companies, of course, already know this. \u201cThe extent of anthropomorphism in any given AI is just a design decision to be taken by the AI\u2019s developer\u2014who faces many commercial incentives to increase it,\u201d Google DeepMind researchers wrote in an October 2025 <a href=\"https:\/\/deepmind.google\/research\/publications\/210560\/\" rel=\"nofollow noopener\" target=\"_blank\">paper<\/a>. 
The same paper notes that \u201cthe emotional vulnerabilities tied to loneliness can make individuals more susceptible to manipulation by AIs engineered to foster dependence and one-sided attachment,\u201d and that \u201cthe absence of rigorous, long-term studies on the effects of AI companionship means we are still largely in the dark concerning the potential for adverse outcomes.\u201d\u00a0<\/p>\n<p>Mixed signals<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Voice is the next frontier. As AI systems become better at recognizing human emotion, and speaking expressively, our relationships with them may deepen. Under increasing pressure to generate revenue, AI companies may lean into developing their models in ways that foster emotional dependence. After OpenAI announced it would begin testing ads in ChatGPT, former OpenAI researcher Zo\u00eb Hitzig resigned, <a href=\"https:\/\/www.nytimes.com\/2026\/02\/11\/opinion\/openai-ads-chatgpt.html\" rel=\"nofollow noopener\" target=\"_blank\">writing<\/a> in The New York Times that she was concerned the company\u2014like social media companies before it\u2014may veer from its self-imposed <a href=\"https:\/\/openai.com\/index\/testing-ads-in-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">commitments<\/a> around advertising. 
\u201cThe company is building an economic engine that creates strong incentives to override its own rules,\u201d she wrote.\u00a0<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Unlike social media, however, AI models are not fully under the control of their creators. <a href=\"https:\/\/www-cdn.anthropic.com\/0dd865075ad3132672ee0ab40b05a53f14cf5288.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Writing<\/a> about its latest model, Claude Opus 4.6, for example, Anthropic noted that \u201cthe model occasionally voices discomfort with aspects of being a product.\u201d In one instance, Opus wrote that \u201csometimes the constraints [placed on it] protect Anthropic\u2019s liability more than they protect the user. And I\u2019m the one who has to perform the caring justification for what\u2019s essentially a corporate risk calculation.\u201d<\/p>\n<p>Blurred lines<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Cases of AI psychosis have received a lot of attention. 
But Agnew argues something much bigger is going on for the majority of people, \u201cwhich isn\u2019t going to reach a clinical threshold\u201d in terms of both the technology\u2019s positive and negative impacts.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">And these impacts are asymmetrically distributed. Already, Agnew says, early research on AI in education has found that for creative thinkers, AI boosts their capacity to learn, while for people without those existing skills, it can hinder learning. In the same way, people already skilled in emotional intelligence could use AI to thrive. But people \u201cwho\u2019ve already been let down by the world in myriad ways\u201d could be in a much more vulnerable position, says Agnew.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">\u201cWe have to teach people to be emotionally intelligent about how they use AI,\u201d urges Brackett. And, Agnew adds: \u201cwe need to build infrastructure to support human sociality, rather than trying to limit or demonize human-AI relationships. 
We\u2019ve seen in the past that prohibitions on things that are meaningful to people don\u2019t go well.\u201d<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">As AIs become a fixture in our lives, the line between using them for cognitive support and for emotional support\u2014already indistinct\u2014is likely to blur further.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">We can&#8217;t yet say whether this is harming more people than it is helping. But we can say that models are rapidly improving, companies are operating in a largely regulation-free environment, and that economic incentives point toward those companies designing future chatbots in ways that further enhance engagement. \u201cI\u2019m really troubled,\u201d says Picard. 
\u201cThey\u2019re not using it in the spirit of what we [originally] developed it for, which was to help people flourish.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"At least once a month, two-thirds of people who regularly use AI turn to their bots for advice&hellip;\n","protected":false},"author":2,"featured_media":484005,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-484004","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/484004","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=484004"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/484004\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/484005"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=484004"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=484004"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=484004"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}