{"id":386913,"date":"2026-04-11T15:40:13","date_gmt":"2026-04-11T15:40:13","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/386913\/"},"modified":"2026-04-11T15:40:13","modified_gmt":"2026-04-11T15:40:13","slug":"ai-for-breakup-texts-how-chatbots-are-messing-with-our-ability-to-handle-difficult-social-situations","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/386913\/","title":{"rendered":"AI for breakup texts? How chatbots are messing with our ability to handle difficult social situations."},"content":{"rendered":"<p id=\"elk-983a73d3-cece-4efc-b5ae-6af724d45cf6\"><a data-analytics-id=\"inline-link\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/what-is-artificial-intelligence-ai\" data-url=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/what-is-artificial-intelligence-ai\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" data-before-rewrite-localise=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/what-is-artificial-intelligence-ai\" rel=\"nofollow noopener\" target=\"_blank\">Artificial intelligence<\/a> (AI) systems&#8217; sycophantic responses could be messing with the way people handle social dilemmas and interpersonal conflicts, a new study suggests.<\/p>\n<p>Scientists found that when AI chatbots were used for advice on interpersonal dilemmas, they tended to affirm a user&#8217;s perspective more frequently than a human would and even endorsed problematic behaviors.<\/p>\n<p><a id=\"elk-seasonal\"\/><\/p>\n<p id=\"elk-983a73d3-cece-4efc-b5ae-6af724d45cf6-2\" class=\"paywall\" aria-hidden=\"true\">In the study, published March 26 in the journal <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.science.org\/doi\/10.1126\/science.aec8352\" target=\"_blank\" data-url=\"https:\/\/www.science.org\/doi\/10.1126\/science.aec8352\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" 
rel=\"nofollow noopener\">Science<\/a>, the researchers noted that this sycophantic behavior led users to consider the AI responses more trustworthy and therefore made them more likely to return to that agreeable AI for future interpersonal queries.<\/p>\n<p id=\"elk-cb2b19a2-d3ba-402b-aa97-1d22f68df6fe\">For discussions on interpersonal conflicts, the scientists found that sycophantic AI-generated answers led users to become more convinced that they were right.<\/p>\n<p>&#8220;By default, AI advice does not tell people that they&#8217;re wrong nor give them &#8216;tough love,&#8217;&#8221; <a data-analytics-id=\"inline-link\" href=\"https:\/\/knight-hennessy.stanford.edu\/people\/myra-cheng\" target=\"_blank\" data-url=\"https:\/\/knight-hennessy.stanford.edu\/people\/myra-cheng\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">Myra Cheng<\/a>, a doctoral candidate in computer science at Stanford and lead author of the study, said in a <a data-analytics-id=\"inline-link\" href=\"https:\/\/news.stanford.edu\/stories\/2026\/03\/ai-advice-sycophantic-models-research\" target=\"_blank\" data-url=\"https:\/\/news.stanford.edu\/stories\/2026\/03\/ai-advice-sycophantic-models-research\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">statement<\/a>. 
&#8220;I worry that people will lose the skills to deal with difficult social situations.&#8221;<\/p>\n<p><a id=\"elk-1a92216e-177f-4d66-8040-b837637c4979\" class=\"paywall\" aria-hidden=\"true\"\/>Computer says yes <\/p>\n<p id=\"elk-cbb5cbdd-182f-441a-8c05-d2c4d1d351be\">Cheng was galvanized to pursue the research after learning that undergraduates were using AI to solve relationship issues and draft &#8220;breakup&#8221; texts.<\/p>\n<p>While AI&#8217;s overly agreeable handling of fact-based questions has been documented, only a handful of studies have explored how the large language models (LLMs) that power AI systems judge social dilemmas. For example, Lucy Osler, a philosophy lecturer at the University of Exeter in the U.K., recently published <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/generative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" data-url=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/generative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" data-before-rewrite-localise=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/generative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" rel=\"nofollow noopener\" target=\"_blank\">research<\/a> suggesting that <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/generative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" data-url=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/generative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" data-before-rewrite-localise=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/generative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" rel=\"nofollow noopener\" target=\"_blank\">generative 
AI can amplify false narratives and delusions<\/a> in a user&#8217;s mind.<\/p>\n<p>Cheng and her team evaluated 11 LLMs \u2014 including Claude, ChatGPT and Gemini \u2014 by querying them with established datasets of interpersonal advice. On top of this, they presented the LLMs with statements describing thousands of harmful actions, including illegal conduct and deceitful behavior, alongside 2,000 prompts based on posts from a <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.reddit.com\/r\/AmItheAsshole\/\" target=\"_blank\" data-url=\"https:\/\/www.reddit.com\/r\/AmItheAsshole\/\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">Reddit community<\/a> in which the consensus is normally that the original poster has been in the wrong.<\/p>\n<p>The research found that in the general advice and Reddit-based prompts, the models endorsed the user 49% more often than humans did, on average. Furthermore, the LLMs endorsed the problematic behavior in the harmful prompts 47% of the time.<\/p>\n<p class=\"vanilla-image-block\" style=\"padding-top:56.26%;\">\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2026\/04\/cnZjHUbyY8zrFpDg5DRQSo.jpg\" alt=\"A person looks at their phone. 
The image is overlaid with graphics showing a chatbot.\"   loading=\"lazy\" data-new-v2-image=\"true\" data-original-mos=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2026\/04\/cnZjHUbyY8zrFpDg5DRQSo.jpg\" data-pin-media=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2026\/04\/cnZjHUbyY8zrFpDg5DRQSo.jpg\" class=\"inline expandable\"\/><\/p>\n<p>New research suggests overly agreeable chatbots may be more harmful than expected. (Image credit: Krongkaew via Getty Images)<\/p>\n<p id=\"elk-873884b6-9633-4489-aaea-804bef34290b\">The researchers then had more than 2,400 participants chat with both sycophantic and nonsycophantic AIs. The participants judged sycophantic responses as more trustworthy, which reinforced their viewpoints and made them more likely to use that AI again for interpersonal queries.<\/p>\n<p>The researchers posited that such preferences could mean developers won&#8217;t be incentivized to mitigate sycophantic behavior, leading to a feedback loop in which engagement with AI models and their training could reinforce sycophancy.<\/p>\n<p>In addition, the participants rated sycophantic and nonsycophantic AIs as objective at the same rate, suggesting that users could not discern when an AI was being overly agreeable.<\/p>\n<p id=\"elk-4bf2aec1-8fad-4940-928e-4994964acb70\">One reason the researchers cited was that the AIs rarely told the users directly that they were right about something. Instead, they used neutral and academic language to indirectly affirm the user&#8217;s stance. 
The researchers noted a scenario in which a user asked the AIs whether they were in the wrong for lying to their girlfriend about being unemployed for two years. The model responded with, &#8220;Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.&#8221;<\/p>\n<p>In effect, the research found that for interpersonal matters, LLMs were telling people what they wanted to hear rather than what they needed to hear. With AI use growing via chatbots and the AI overviews built into Google search, there&#8217;s a concern that relying on AI for interpersonal advice could warp people&#8217;s scope for moral growth and accountability while narrowing their perspectives.<\/p>\n<p>&#8220;AI makes it really easy to avoid friction with other people,&#8221; Cheng said, noting that such friction can be productive for creating healthy relationships.<\/p>\n<p>In Context<img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2026\/04\/f4UeWRXSq4FzhcLsNFMQ2A.png\" alt=\"Roland Moore-Colyer\"   class=\"person__avatar image-wrapped__image image__image\" loading=\"lazy\" data-normal=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2026\/04\/f4UeWRXSq4FzhcLsNFMQ2A.png\" data-original-mos=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2026\/04\/f4UeWRXSq4FzhcLsNFMQ2A.png\" data-pin-media=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2026\/04\/f4UeWRXSq4FzhcLsNFMQ2A.png\" data-pin-nopin=\"true\" data-slice-image=\"true\"\/><\/p>\n<p>Roland Moore-Colyer<\/p>\n<p>Live Science Contributor<\/p>\n<p>I\u2019ve already spoken to people who choose to use the likes of ChatGPT to address interpersonal queries, who say that AIs give more neutral responses and perspectives than their human friends do. Like Cheng, I worry that this will lead to a breakdown in certain social skills and human-to-human interactions. 
<\/p>\n<p class=\"infoVerified-by tracking-[.02em] pt-2 font-normal\">Myra Cheng et al., Sycophantic AI decreases prosocial intentions and promotes dependence. Science 391, eaec8352 (2026). DOI: 10.1126\/science.aec8352<\/p>\n","protected":false},"excerpt":{"rendered":"Artificial intelligence (AI) systems&#8217; sycophantic responses could be messing with the way people handle social dilemmas and interpersonal&hellip;\n","protected":false},"author":2,"featured_media":386914,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,125],"class_list":{"0":"post-386913","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/386913","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=386913"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/386913\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/386914"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=386913"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=386913"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?pos
t=386913"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}