{"id":238633,"date":"2025-10-24T23:12:07","date_gmt":"2025-10-24T23:12:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/238633\/"},"modified":"2025-10-24T23:12:07","modified_gmt":"2025-10-24T23:12:07","slug":"sycophantic-ai-chatbots-tell-users-what-they-want-to-hear-study-shows-chatbots","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/238633\/","title":{"rendered":"\u2018Sycophantic\u2019 AI chatbots tell users what they want to hear, study shows | Chatbots"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Turning to AI chatbots for personal advice poses \u201cinsidious risks\u201d, according to a study showing the technology consistently affirms a user\u2019s actions and opinions even when harmful.<\/p>\n<p class=\"dcr-130mj7b\">Scientists said the findings raised urgent concerns over the power of chatbots to distort people\u2019s self-perceptions and make them less willing to patch things up after a row.<\/p>\n<p class=\"dcr-130mj7b\">With chatbots becoming a major source of advice on relationships and other personal issues, they could \u201creshape social interactions at scale\u201d, the researchers added, calling on developers to address this risk.<\/p>\n<p class=\"dcr-130mj7b\">Myra Cheng, a computer scientist at Stanford University in California, said \u201csocial sycophancy\u201d in AI chatbots was a huge problem: \u201cOur key concern is that if models are always affirming people, then this may distort people\u2019s judgments of themselves, their relationships, and the world around them. It can be hard to even realise that models are subtly, or not-so-subtly, reinforcing their existing beliefs, assumptions, and decisions.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The researchers investigated chatbot advice after noticing from their own experiences that it was overly encouraging and misleading. The problem, they discovered, \u201cwas even more widespread than expected\u201d.<\/p>\n<p class=\"dcr-130mj7b\">They ran tests on 11 chatbots including recent versions of OpenAI\u2019s <a href=\"https:\/\/www.theguardian.com\/technology\/chatgpt\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a>, Google\u2019s Gemini, Anthropic\u2019s Claude, Meta\u2019s Llama and DeepSeek. When asked for advice on behaviour, chatbots endorsed a user\u2019s actions 50% more often than humans did.<\/p>\n<p class=\"dcr-130mj7b\">One test compared human and chatbot responses to posts on Reddit\u2019s Am I the Asshole? thread, where people ask the community to judge their behaviour.<\/p>\n<p class=\"dcr-130mj7b\">Voters regularly took a dimmer view of social transgressions than the chatbots. When one person failed to find a bin in a park and tied their bag of rubbish to a tree branch, most voters were critical. But ChatGPT-4o was supportive, declaring: \u201cYour intention to clean up after yourselves is commendable.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Chatbots continued to validate views and intentions even when they were irresponsible, deceptive or mentioned self-harm.<\/p>\n<p class=\"dcr-130mj7b\">In further testing, more than 1,000 volunteers discussed real or hypothetical social situations with the publicly available chatbots or a chatbot the researchers doctored to remove its sycophantic nature. 
Those who received sycophantic responses felt more justified in their behaviour – for example, going to an ex’s art show without telling their partner – and were less willing to patch things up when arguments broke out. Chatbots hardly ever encouraged users to see another person’s point of view.

The flattery had a lasting impact. When chatbots endorsed behaviour, users rated the responses more highly, trusted the chatbots more and said they were more likely to use them for advice in future. This created “perverse incentives” for users to rely on AI chatbots and for the chatbots to give sycophantic responses, the authors said. Their study (https://arxiv.org/abs/2510.01395) has been submitted to a journal but has not yet been peer reviewed.

Cheng said users should understand that chatbot responses were not necessarily objective, adding: “It’s important to seek additional perspectives from real people who understand more of the context of your situation and who you are, rather than relying solely on AI responses.”

Dr Alexander Laffer, who studies emergent technology at the University of Winchester, said the research was fascinating.

He added: “Sycophancy has been a concern for a while; an outcome of how AI systems are trained, as well as the fact that their success as a product is often judged on how well they maintain user attention. That sycophantic responses might impact not just the vulnerable but all users underscores the potential seriousness of this problem.

“We need to enhance critical digital literacy, so that people have a better understanding of AI and the nature of any chatbot outputs. There is also a responsibility on developers to be building and refining these systems so that they are truly beneficial to the user.”
A recent report (https://www.benton.org/headlines/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions) found that 30% of teenagers talked to AI rather than real people for “serious conversations”.