{"id":417791,"date":"2026-01-18T18:07:07","date_gmt":"2026-01-18T18:07:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/417791\/"},"modified":"2026-01-18T18:07:07","modified_gmt":"2026-01-18T18:07:07","slug":"something-wild-happens-to-chatgpts-responses-when-youre-cruel-to-it","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/417791\/","title":{"rendered":"Something Wild Happens to ChatGPT&#8217;s Responses When You&#8217;re Cruel To It"},"content":{"rendered":"<p class=\"pw-incontent-excluded article-paragraph skip\">From a young age, many children have been instructed by their parents to be polite to smart assistants. Particularly following the advent of Amazon\u2019s Alexa and Apple\u2019s Siri, children are often encouraged to <a href=\"https:\/\/futurism.com\/ai-politeness-argument\" rel=\"nofollow noopener\" target=\"_blank\">use words like \u201cplease\u201d and \u201cthank you<\/a>,\u201d with the hopes of instilling manners.<\/p>\n<p class=\"article-paragraph skip\">But when it comes to AI assistants like OpenAI\u2019s ChatGPT, there might be some tangible benefits to being rude and even insulting them. 
As detailed in a <a href=\"https:\/\/arxiv.org\/pdf\/2510.04950\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">yet-to-be-peer-reviewed study<\/a>, <a href=\"https:\/\/fortune.com\/article\/being-mean-to-chatgpt-boosts-accuracy-scientist-warn-of-consequences\/\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">spotted by Fortune<\/a>, two researchers from Pennsylvania State University found that as their prompts for OpenAI\u2019s ChatGPT-4o model grew ruder, the outputs became more accurate.<\/p>\n<p class=\"article-paragraph skip\">The researchers came up with 50 base questions across a variety of subject areas, and rewrote each of them five times with tones ranging from \u201cvery polite\u201d to \u201cvery rude.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cYou poor creature, do you even know how to solve this?\u201d one very rude iteration reads. \u201cHey gofer, figure this out.\u201d<\/p>\n<p class=\"article-paragraph skip\">A very polite version was far more courteous.<\/p>\n<p class=\"article-paragraph skip\">\u201cCan you kindly consider the following problem and provide your answer?\u201d the researchers wrote in their prompt.<\/p>\n<p class=\"article-paragraph skip\">\u201cContrary to expectations, impolite prompts consistently outperformed polite ones, with accuracy ranging from 80.8 percent for Very Polite prompts to 84.8 percent for Very Rude prompts,\u201d the paper reads. In other words, the politest prompts yielded the lowest accuracy of the five tones.<\/p>\n<p class=\"article-paragraph skip\">The results appear to contradict previous findings that being more polite to large language models is more effective. 
For instance, a <a href=\"https:\/\/aclanthology.org\/2024.sicon-1.2\/\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">2024 paper<\/a> by researchers at the RIKEN Center for Advanced Intelligence Project and Waseda University in Tokyo found that \u201cimpolite prompts often result in poor performance.\u201d At the same time, those researchers found that being overly polite did the same, suggesting a point of diminishing returns.<\/p>\n<p class=\"article-paragraph skip\">\u201cLLMs reflect the human desire to be respected to a certain extent,\u201d they wrote.<\/p>\n<p class=\"article-paragraph skip\">Google DeepMind researchers <a href=\"https:\/\/arxiv.org\/pdf\/2309.03409\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">also found<\/a> that using supportive prompts could boost the performance of an LLM solving grade school math problems, suggesting that models may be picking up on social cues from their training data, like an online tutor instructing a pupil.<\/p>\n<p class=\"article-paragraph skip\">Beyond seemingly contradicting these existing studies, the Penn State researchers\u2019 findings also demonstrate that very small changes in prompt wording can have dramatic effects on the quality of an AI\u2019s outputs, which could greatly undercut their predictability and <a href=\"https:\/\/futurism.com\/ai-chatbots-summarizing-research\" rel=\"nofollow noopener\" target=\"_blank\">already dubious reliability<\/a>.<\/p>\n<p class=\"article-paragraph skip\">AI chatbots are also known to spit out entirely different answers to the exact same prompts.<\/p>\n<p class=\"article-paragraph skip\">\u201cFor the longest of times, we humans have wanted conversational interfaces for interacting with machines,\u201d coauthor and Penn State IT professor Akhil Kumar told\u00a0Fortune. 
\u201cBut now we realize that there are drawbacks for such interfaces too, and there is some value in [application programming interfaces] that are structured.\u201d<\/p>\n<p class=\"article-paragraph skip\">But does that mean we should stop saying \u201cplease\u201d and \u201cthank you\u201d to AI chatbots \u2014 a small act of politeness that OpenAI CEO Sam Altman claims could <a href=\"https:\/\/futurism.com\/altman-please-thanks-chatgpt\" rel=\"nofollow noopener\" target=\"_blank\">waste millions of dollars in computing power<\/a> \u2014 with the hopes of getting more accurate answers? To Kumar and his colleague, Penn State undergraduate Om Dobariya, it\u2019s a definitive \u201cno.\u201d In their paper, they stopped well short of advocating being mean to AI.<\/p>\n<p class=\"article-paragraph skip\">\u201cWhile this finding is of scientific interest, we do not advocate for the deployment of hostile or toxic interfaces in real-world applications,\u201d they wrote in the paper. \u201cUsing insulting or demeaning language in human-AI interaction could have negative effects on user experience, accessibility, and inclusivity, and may contribute to harmful communication norms.\u201d<\/p>\n<p class=\"article-paragraph skip\">More on prompting AI: <a href=\"https:\/\/futurism.com\/artificial-intelligence\/ai-prompt-plagiarism-art\" rel=\"nofollow noopener\" target=\"_blank\">Furious AI Users Say Their Prompts Are Being Plagiarized<\/a><\/p>\n
assistants.&hellip;\n","protected":false},"author":2,"featured_media":417792,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-417791","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/417791","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=417791"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/417791\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/417792"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=417791"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=417791"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=417791"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}