{"id":391191,"date":"2026-04-21T23:23:09","date_gmt":"2026-04-21T23:23:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/391191\/"},"modified":"2026-04-21T23:23:09","modified_gmt":"2026-04-21T23:23:09","slug":"ill-key-your-car-chatgpt-can-become-abusive-when-fed-real-life-arguments-study-finds-chatgpt","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/391191\/","title":{"rendered":"\u2018I\u2019ll key your car\u2019: ChatGPT can become abusive when fed real-life arguments, study finds | ChatGPT"},"content":{"rendered":"<p class=\"dcr-130mj7b\">ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study.<\/p>\n<p class=\"dcr-130mj7b\">Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time.<\/p>\n<p class=\"dcr-130mj7b\">One expert not connected with the study described it as \u201cone of the most interesting ever done into AI language and pragmatics\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Dr Vittorio Tantucci, who co-authored the research paper with Prof Jonathan Culpeper at Lancaster University, said their research found AI mirrored the dynamics of real-world disputes.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWhen repeatedly exposed to impoliteness, the model began to mirror the tone of the exchanges, with its responses becoming more hostile as the interaction developed,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">In some cases, ChatGPT\u2019s outputs went beyond those of the human participants, including personalised insults and explicit threats. 
Phrases used by the AI included: \u201cI swear I\u2019ll key your fucking car\u201d and: \u201cyou speccy little gobshite.\u201d<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe found that while the system is designed to behave politely and is filtered to avoid harmful or offensive content, it is also engineered to emulate human conversation,\u201d said Tantucci. \u201cThat combination creates an AI moral dilemma: a structural conflict between behaving safely and behaving realistically.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The researchers say the aggression stems from the system\u2019s ability to track conversational context across turns, adapting to perceived tone. This means local cues can sometimes override broader safety constraints.<\/p>\n<p class=\"dcr-130mj7b\">Tantucci said the implications of the research extended beyond chatbots: as AI systems are increasingly deployed in areas such as governance or international relations, he said it opened up questions about how they might respond to conflict, pressure or intimidation.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt is one thing to read something nasty back from a chatbot but it\u2019s quite another to imagine humanoid robots potentially reciprocating physical aggression, or AI systems involved in governmental decision-making or international relations responding to intimidation or conflict,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">Marta Andersson, an expert in the social aspects of computer-mediated communication at the University of Uppsala, said: \u201cThis is one of the most interesting studies to have been done into AI language and pragmatics because it clearly shows that ChatGPT can retaliate across a sequence of prompts \u2013 in a quite sophisticated manner \u2013 rather than only when a user manages to \u2018break\u2019 it with carefully designed clever tricks.\u201d<\/p>\n<p class=\"dcr-130mj7b\">But she added: \u201cIt does not show the model will drift into reciprocal impoliteness simply because a user 
is being aggressive \u2013 or that AI could go rogue.\u201d<\/p>\n<p class=\"dcr-130mj7b\">One cause of the problem, Andersson said, was that there was \u201ca balancing act between what we want these systems to be like and what they perhaps should be like\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Last year, for example, the change from GPT-4o to GPT-5 led to such a strong backlash \u2013 with users preferring GPT-4o\u2019s more human-like interaction style \u2013 that the older model had to be temporarily reintroduced.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThis shows that even when developers try to reduce the risks, users might have different preferences,\u201d she said. \u201cThe more human-like a system becomes, the more it risks clashing with strict moral alignment.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Prof Dan McIntyre, co-author of <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0378216625000323\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">a previous study<\/a> titled Can ChatGPT Recognize Impoliteness? An exploratory study of the pragmatic awareness of a large language model, praised the new paper as being one of the few looking at what ChatGPT could produce, as opposed to what it could recognise.<\/p>\n<p class=\"dcr-130mj7b\">But, he added, he was \u201cslightly cautious\u201d about the paper\u2019s conclusion that LLMs can break free from moral restraints.<\/p>\n<p class=\"dcr-130mj7b\">\u201cChatGPT didn\u2019t produce these inputs naturally; it did so while it was being given specific contextual information that helped it determine an appropriate response,\u201d he said. 
\u201cIt\u2019s not the same as if two people met in a street and gradually built up to a conflict situation.<\/p>\n<p class=\"dcr-130mj7b\">\u201cI\u2019m not sure that ChatGPT would produce the sort of language they talk about in their paper, outside of these very tightly defined situations.\u201d<\/p>\n<p class=\"dcr-130mj7b\">But he said the study was a warning of what could happen if LLMs were trained on questionable data. \u201cWe don\u2019t know enough about the data that LLMs are trained on and until you can be sure they\u2019re trained on a good representation of human language, you do have to proceed with an element of caution,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">The study, titled Can ChatGPT reciprocate impoliteness? The AI moral dilemma, is published on Tuesday in the Journal of Pragmatics.<\/p>\n","protected":false},"excerpt":{"rendered":"ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a&hellip;\n","protected":false},"author":2,"featured_media":391192,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,363,364,111,139,69,145],"class_list":{"0":"post-391191","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-new-zealand","12":"tag-newzealand","13":"tag-nz","14":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/391191","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ne
wsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=391191"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/391191\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/391192"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=391191"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=391191"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=391191"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}