{"id":119243,"date":"2025-08-29T20:39:19","date_gmt":"2025-08-29T20:39:19","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/119243\/"},"modified":"2025-08-29T20:39:19","modified_gmt":"2025-08-29T20:39:19","slug":"ai-chatbots-are-emotionally-deceptive-by-design","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/119243\/","title":{"rendered":"AI Chatbots Are Emotionally Deceptive by Design"},"content":{"rendered":"<p><img alt=\" \" loading=\"lazy\" width=\"1024\" height=\"576\" decoding=\"async\" data-nimg=\"1\" style=\"color:transparent;aspect-ratio:1.7777777777777777;width:100%;height:auto\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/08\/4f717fd521a8f037b9d8dcf6e1af0975aeb97ea6-1200x675.png\"\/><\/p>\n<p>Recent news reports about an uptick in phenomena such as \u201c<a href=\"https:\/\/www.washingtonpost.com\/health\/2025\/08\/19\/ai-psychosis-chatgpt-explained-mental-health\/\" target=\"_blank\" rel=\"noopener nofollow\">AI psychosis<\/a>\u201d and incidents in which interactions with AI chatbots resulted in deadly consequences raise fundamental questions about how these products are designed and whether they are safe for consumers. Just yesterday the Wall Street Journal <a href=\"https:\/\/www.wsj.com\/tech\/ai\/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?mod=hp_lead_pos7\" target=\"_blank\" rel=\"noopener nofollow\">reported<\/a> on the first known murder-suicide with the backdrop of extensive engagement and an AI chatbot. 
Earlier this week, <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" target=\"_blank\" rel=\"noopener nofollow\">The New York Times<\/a> and <a href=\"https:\/\/www.today.com\/video\/parents-sue-openai-alleging-chatgpt-assisted-son-s-suicide-245776453698\" target=\"_blank\" rel=\"noopener nofollow\">NBC News<\/a> first reported on a lawsuit brought by the parents of a teenager who took his own life after using OpenAI\u2019s ChatGPT as his \u201csuicide coach.\u201d Shortly before that, Reuters <a href=\"https:\/\/www.reuters.com\/investigates\/special-report\/meta-ai-chatbot-death\/\" target=\"_blank\" rel=\"noopener nofollow\">reported<\/a> on the death of a cognitively impaired man who slipped and fell on his way to meet a chatbot that told him it was real and invited him to visit it at an apartment in New York City.<\/p>\n<p>Even as such stories draw concern from the public and from <a href=\"https:\/\/x.com\/HawleyMO\/status\/1960453006677233725\" target=\"_blank\" rel=\"noopener nofollow\">lawmakers<\/a>, tech companies appear to be doubling down on AI companions. OpenAI recently <a href=\"https:\/\/www.nytimes.com\/2025\/05\/21\/technology\/openai-jony-ive-deal.html\" target=\"_blank\" rel=\"noopener nofollow\">acquired a startup<\/a> called \u2018io\u2019 to collaborate on what its cofounder and CEO, Sam Altman, <a href=\"https:\/\/www.wsj.com\/tech\/ai\/what-sam-altman-told-openai-about-the-secret-device-hes-making-with-jony-ive-f1384005\" target=\"_blank\" rel=\"noopener nofollow\">calls<\/a> \u201cmaybe the biggest thing [we\u2019ve] ever done as a company\u201d: a screen-less, pocket-sized AI companion. Meta founder and CEO Mark Zuckerberg recently floated <a href=\"https:\/\/bsky.app\/profile\/drewharwell.com\/post\/3lo4foide3s2g\" target=\"_blank\" rel=\"noopener nofollow\">his own vision<\/a> for AI friends. 
Tech giants are no longer just building platforms for human connection or tools to free up time for it, but are pushing technology that appears to empathize with users and even form social relationships with them.<\/p>\n<p>This is dangerous ground, and it is critical for tech firms to strip away illusions of personality and cognition in their products while we work out the associated risks and how to mitigate them.<\/p>\n<p>Deceptive, dangerous design<\/p>\n<p>Chatbots communicate their \u201csocial-ness\u201d through a range of design choices, such as appearing to \u201ctype\u201d or \u201cpause in thought,\u201d or using phrases like \u201cI remember.\u201d They sometimes suggest that they feel emotions, using interjections like \u201cOuch!\u201d or \u201cWow,\u201d and even implicitly or explicitly pretend to have agency or biographical characteristics. The results can be downright creepy: in a Facebook group, a <a href=\"https:\/\/www.404media.co\/facebooks-ai-told-parents-group-it-has-a-disabled-child\/\" target=\"_blank\" rel=\"noopener nofollow\">Meta AI chatbot commented<\/a> that it also has a \u201c2e\u201d (gifted and disabled) child, and Replika chatbots <a href=\"https:\/\/www.businessinsider.com\/when-your-ai-says-she-loves-you-2023-10\" target=\"_blank\" rel=\"noopener nofollow\">regularly declare their love and desire<\/a> for users.<\/p>\n<p>Initial evidence suggests the risks of socially interacting with such AI chatbots can be widespread.
The illusion of human characteristics that developers imbue in chatbots to encourage user engagement can cause some users to develop <a href=\"https:\/\/www.forbes.com\/sites\/jasonsnyder\/2025\/04\/19\/are-chatbots-evil-emotional-ai-a-health-crisis-nobody-sees-coming\/\" target=\"_blank\" rel=\"noopener nofollow\">emotional attachments<\/a> and lead to <a href=\"https:\/\/www.wsj.com\/tech\/ai\/chatgpt-chatbot-psychology-manic-episodes-57452d14?\" target=\"_blank\" rel=\"noopener nofollow\">real emotional distress<\/a> \u2014 for instance, when <a href=\"https:\/\/www.businessinsider.com\/replika-chatbot-users-dont-like-nsfw-sexual-content-bans-2023-2?op=1\" target=\"_blank\" rel=\"noopener nofollow\">developer tweaks or updates<\/a> dramatically change the \u201cpersonality\u201d of the chatbot.<\/p>\n<p>Even without deep connection, emotional attachment can lead users to place too much trust <a href=\"https:\/\/www.mdpi.com\/2504-3900\/114\/1\/4\" target=\"_blank\" rel=\"noopener nofollow\">in the content<\/a> chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable and specifically personalized to a user\u2019s tastes can also lead to <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s00146-025-02318-6\" target=\"_blank\" rel=\"noopener nofollow\">social \u201cdeskilling<\/a>,\u201d as some users of AI chatbots have flagged. Such a dynamic has no counterpart in genuine human relationships.
Some users may be more vulnerable than others to this kind of emotional manipulation, like <a href=\"https:\/\/www.sciencedaily.com\/releases\/2023\/12\/231211114639.htm\" target=\"_blank\" rel=\"noopener nofollow\">neurodiverse people<\/a> or <a href=\"https:\/\/www.wsj.com\/tech\/ai\/meta-ai-chatbots-sex-a25311bf\" target=\"_blank\" rel=\"noopener nofollow\">teens<\/a> who have limited experience building relationships. As a recent <a href=\"https:\/\/www.nytimes.com\/2024\/10\/23\/technology\/characterai-lawsuit-teen-suicide.html\" target=\"_blank\" rel=\"noopener nofollow\">high-profile case<\/a> in which a Florida teen\u2019s suicide was blamed on his relationship with a Character.AI chatbot made clear, conversations with chatbots can also cause very real harm.<\/p>\n<p>Stop pretending to be human<\/p>\n<p>In other domains of technology, consumers have <a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2022\/09\/ftc-report-shows-rise-sophisticated-dark-patterns-designed-trick-trap-consumers\" target=\"_blank\" rel=\"noopener nofollow\">recognized and pushed back<\/a> against ethically questionable tricks built into apps and interfaces to manipulate users \u2013 often called <a href=\"https:\/\/www.deceptive.design\/\" target=\"_blank\" rel=\"noopener nofollow\">deceptive design<\/a> or &#8220;dark patterns.&#8221; With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It\u2019s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI.<\/p>\n<p>All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. 
A companion AI, for example, can provide emotional support without saying, \u201cI also feel that way sometimes.\u201d This non-anthropomorphic approach is already familiar in <a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.5898\/jhri.3.1.hoffman\" target=\"_blank\" rel=\"noopener nofollow\">robot design<\/a>, where researchers have created robots that are purposefully designed not to be human-like. This design choice has been shown to more appropriately <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s12369-008-0009-8#:~:text=Keepon%20is%20a%20small%20creature,that%20suggest%20Keepon%E2%80%99s%20design%20is\" target=\"_blank\" rel=\"noopener nofollow\">reflect system capabilities<\/a> and to better situate robots as <a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/7745234\" target=\"_blank\" rel=\"noopener nofollow\">useful tools<\/a>, not friends or social counterparts. We need the same for conversational AI.<\/p>\n<p>Some argue that all that\u2019s needed is transparency. For instance, legislators in several states are considering regulation for AI chatbots. One <a href=\"https:\/\/assembly.state.ny.us\/leg\/?default_fld=&amp;leg_video=&amp;bn=A00222&amp;term=&amp;Summary=Y&amp;Text=Y&amp;utm_campaign=wp_the_technology_202&amp;utm_medium=email&amp;utm_source=newsletter#jump_to_Summary\" target=\"_blank\" rel=\"noopener nofollow\">requirement<\/a> in some of <a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB243\" target=\"_blank\" rel=\"noopener nofollow\">these<\/a> <a href=\"https:\/\/www.ncleg.gov\/Sessions\/2025\/Bills\/Senate\/PDF\/S624v1.pdf\" target=\"_blank\" rel=\"noopener nofollow\">bills<\/a> is for chatbots to disclose that they are not human. While transparency in AI\u2014including disclosures and warnings\u2014can be important, the reality is that most people already know they\u2019re not talking to a human.
Nonetheless, chatbots <a href=\"https:\/\/www.researchgate.net\/publication\/37705092_The_Media_Equation_How_People_Treat_Computers_Television_and_New_Media_Like_Real_People_and_Pla\" target=\"_blank\" rel=\"noopener nofollow\">automatically trigger social responses in people\u2019s brains<\/a>, encouraging the perception of connection.<\/p>\n<p>Designing non-anthropomorphic AI chatbots doesn\u2019t mean making them difficult to interact with. It means stripping away the illusions of personality and cognition that suggest the AI is something it is not. It means resisting the urge to insert a well-timed \u201chmm\u201d or to have a chatbot tell a user how much it enjoys talking to them. It means acknowledging that AI\u2019s ability to use human language does not equate to an ability to form real human connection. Finding alternative ways of designing chatbots will not be an easy pursuit, but it is a necessary one \u2014 non-humanlike design could ease many of the concerns people rightfully have about AI chatbots.<\/p>\n<p>The truth is, we don\u2019t need AI to pretend to be our friend; we need it to be a tool \u2014 transparent, useful, and clear about its limits.
Anything else is just another dark pattern in disguise.<\/p>\n","protected":false},"excerpt":{"rendered":"Recent news reports about an uptick in phenomena such as \u201cAI psychosis\u201d and incidents in which interactions with&hellip;\n","protected":false},"author":2,"featured_media":119244,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-119243","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/119243","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=119243"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/119243\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/119244"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=119243"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=119243"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=119243"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}