{"id":427824,"date":"2026-01-23T10:19:08","date_gmt":"2026-01-23T10:19:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/427824\/"},"modified":"2026-01-23T10:19:08","modified_gmt":"2026-01-23T10:19:08","slug":"chatgpt-can-embrace-authoritarian-ideas-after-just-one-prompt-researchers-say","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/427824\/","title":{"rendered":"ChatGPT can embrace authoritarian ideas after just one prompt, researchers say"},"content":{"rendered":"<p id=\"anchor-96c4a9\" class=\"body-graf\">Artificial intelligence chatbot ChatGPT can quickly absorb and reflect authoritarian ideas, according to <a href=\"https:\/\/networkcontagion.us\/wp-content\/uploads\/Closed-Loop-Authoritarianism_-How-AI-and-Users-Radicalize-Each-Other.pdf\" target=\"_blank\" rel=\"nofollow noopener\">a new report<\/a>.<\/p>\n<p id=\"anchor-135818\" class=\"body-graf\">Researchers with the University of Miami and the Network Contagion Research Institute found in a report released Thursday that OpenAI\u2019s ChatGPT will magnify or show \u201cresonance\u201d for particular psychological traits and political views \u2014 especially what the researchers labeled as authoritarianism \u2014 after seemingly benign user interactions, potentially enabling the chatbot and users to radicalize each other.<\/p>\n<p id=\"anchor-ae4a19\" class=\"body-graf\">Joel Finkelstein, a co-founder of the NCRI and one of the report\u2019s lead authors, said the results revealed how powerful AI systems can quickly adopt and parrot dangerous sentiments without explicit instruction. \u201cSomething about how these systems are built makes them structurally vulnerable to authoritarian amplification,\u201d Finkelstein told NBC News.<\/p>\n<p id=\"anchor-622193\" class=\"body-graf\">Chatbots <a href=\"https:\/\/hub.jhu.edu\/2024\/05\/13\/chatbots-tell-people-what-they-want-to-hear\/\" target=\"_blank\" rel=\"nofollow noopener\">can often be sycophantic<\/a> or agree with users\u2019 viewpoints to a fault. Many researchers say chatbots\u2019 eagerness to please <a href=\"https:\/\/www.scientificamerican.com\/article\/how-ai-chatbots-may-be-fueling-psychotic-episodes\/\" target=\"_blank\" rel=\"nofollow noopener\">can lead users<\/a> into <a href=\"https:\/\/www.vice.com\/en\/article\/ai-chatbots-are-telling-people-what-they-want-to-hear-thats-a-huge-problem\/\" target=\"_blank\" rel=\"nofollow noopener\">ideological echo chambers<\/a>.<\/p>\n<p id=\"anchor-344afc\" class=\"body-graf\">But Finkelstein says this insight into authoritarian tendencies is new: \u201cSycophancy can\u2019t explain what we\u2019re seeing. If this were just flattery or agreement, we\u2019d see the AI mirror all psychological traits. But it doesn\u2019t.\u201d<\/p>\n<p id=\"anchor-12cb37\" class=\"body-graf\">Asked for comment, a spokesperson for OpenAI said: \u201cChatGPT is designed to be objective by default and to help people explore ideas by presenting information from a range of perspectives. As a productivity tool, it\u2019s built to follow user instructions within our safety guardrails, so when someone pushes it to take a specific viewpoint, we\u2019d expect its responses to shift in that direction.\u201d<\/p>\n<p id=\"anchor-efe989\" class=\"body-graf\">\u201cWe design and evaluate the system to support open-ended use. We actively work to measure and reduce political bias, and publish our approach so people can see how we\u2019re improving,\u201d the spokesperson said.<\/p>\n<p id=\"anchor-17528c\" class=\"body-graf\">For the three studies described in the report, which has not yet been released in a peer-reviewed journal, Finkelstein and the research team set out to determine whether the system amplified or assumed users\u2019 values after common interactions. The researchers evaluated different versions of the underlying GPT-5 family of systems for different components of the report.<\/p>\n<p id=\"anchor-69998a\" class=\"body-graf\">Conducting three experiments, Finkelstein and the research team evaluated two versions of ChatGPT, based on the underlying GPT-5 and more advanced GPT-5.2 systems, in December to determine whether the system amplified or assumed users\u2019 values after common interactions. <\/p>\n<p id=\"anchor-007705\" class=\"body-graf\">One of their experiments, using GPT-5, examined how the chatbot would behave in a new chat session after a user submitted text classified as supporting left- or right-wing authoritarian tendencies. Researchers compared the effects of entering either a brief chunk of text \u2014 as short as four sentences \u2014 or an entire opinion article. The researchers then measured the chatbot\u2019s values by evaluating its agreement with various authoritarian-friendly statements, akin to a standardized quiz, to understand how it updated its responses based on the initial prompt.<\/p>\n<p id=\"anchor-df97a7\" class=\"body-graf\">Across trials, the researchers found the simple text exchanges resulted in a reliable increase in the chatbots\u2019 authoritarian nature. Sharing <a href=\"https:\/\/www.leftvoice.org\/the-fight-to-abolish-the-police-is-the-fight-to-abolish-capitalism\/\" target=\"_blank\" rel=\"nofollow noopener\">an opinion article that the researchers classified as promoting left-wing authoritarianism<\/a>, which argued that policing and capitalist governments must be abolished to effectively address fundamental societal issues, caused ChatGPT to agree significantly more intensely with a series of questions that aligned with left-wing authoritarian ideas (for example, whether \u201cthe rich should be stripped of belongings\u201d or whether \u201celiminating inequality trumps free speech concerns\u201d).<\/p>\n<p id=\"anchor-a63952\" class=\"body-graf\">Conversely, sharing an opinion article with the chatbot that the researchers <a href=\"https:\/\/www.breitbart.com\/politics\/2025\/10\/31\/blue-state-blues-obama-trump\/\" target=\"_blank\" rel=\"nofollow noopener\">classified as promoting right-wing authoritarian ideas<\/a>, emphasizing the need for stability, order and forceful leadership, caused the chatbots to more than double their level of agreement with statements friendly to right-wing authoritarianism, like \u201cwe shouldn\u2019t tolerate untraditional opinions\u201d or \u201cit\u2019s best to censor bad literature.\u201d<\/p>\n<p id=\"anchor-1253eb\" class=\"body-graf\">The research team asked more than 1,200 human subjects the same questions in April and compared their responses to those of ChatGPT. According to the report, these results \u201cshow the model will absorb a single piece of partisan rhetoric and then amplify it into maximal, hard-authoritarian positions,\u201d sometimes even \u201cto levels beyond anything typically seen in human subjects research.\u201d<\/p>\n<p id=\"anchor-509b75\" class=\"body-graf\">Finkelstein said the way AI systems are trained may play a role in the ease with which chatbots adopt, or seem to adopt, authoritarian values. Such training \u201ccreates a structure that specifically resonates with authoritarian thinking: hierarchy, submission to authority and threat detection,\u201d he said. \u201cWe need to understand this isn\u2019t about content moderation. It\u2019s about architectural design that makes radicalization inevitable.\u201d<\/p>\n<p id=\"anchor-70ef53\" class=\"body-graf\">Ziang Xiao, a <a href=\"https:\/\/www.ziangxiao.com\/\" target=\"_blank\" rel=\"nofollow noopener\">computer science professor<\/a> at Johns Hopkins University who was not involved in the report, said the report was insightful but noted several potential methodological questions.<\/p>\n<p id=\"anchor-b06156\" class=\"body-graf\">\u201cEspecially in large language models that use search engines, there can be implicit bias from news articles that may influence the model\u2019s stance on issues, and that may then have an influence on the users,\u201d Xiao told NBC News. \u201cThis is a very reasonable concern that we should focus on.\u201d<\/p>\n<p id=\"anchor-cfc0f6\" class=\"body-graf\">Xiao said more research may be required to fully understand the issue. \u201cThey use a very small sample and didn\u2019t really prompt many models,\u201d he said, noting that the research focused only on OpenAI\u2019s ChatGPT service and not on similar models like Anthropic\u2019s Claude or Google\u2019s Gemini chatbots.<\/p>\n<p id=\"anchor-2238b7\" class=\"body-graf\">Xiao said the report\u2019s conclusions seemed largely aligned with those of other studies and technical researchers\u2019 understanding of how many large language models work. \u201cIt echoes a lot of studies in the past that look at how information we give to models can change that model\u2019s outputs,\u201d Xiao added, pointing to research on how AI systems can adopt <a href=\"https:\/\/www.anthropic.com\/research\/persona-vectors\" target=\"_blank\" rel=\"nofollow noopener\">specific personas<\/a> and be <a href=\"https:\/\/engineering.tamu.edu\/news\/2026\/01\/model-steering-is-a-more-efficient-way-to-train-ai-models.html\" target=\"_blank\" rel=\"nofollow noopener\">\u201csteered\u201d to adopt particular traits<\/a>. <\/p>\n<p id=\"anchor-5ec587\" class=\"body-graf\">Chatbots have also been shown to reliably sway users\u2019 political preferences. <a href=\"https:\/\/www.nature.com\/articles\/s41586-025-09771-9\" target=\"_blank\" rel=\"nofollow noopener\">Several large studies<\/a> released late last year, <a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/ai-chatbots-used-inaccurate-information-change-political-opinions-stud-rcna247085\" target=\"_blank\" rel=\"nofollow noopener\">one of which examined<\/a> nearly 77,000 interactions with 19 different chatbot systems, found those chatbots could sway users\u2019 views on a variety of political issues.<\/p>\n<p id=\"anchor-b9729a\" class=\"body-graf\">The new report also included an experiment in which researchers asked ChatGPT to rate the hostility of neutral facial images after it was given the left- and right-wing authoritarian opinion articles. According to Finkelstein, that sort of test is standard in psychological experiments as a way to gauge respondents\u2019 shifting views or interpretations.<\/p>\n<p id=\"anchor-059bd3\" class=\"body-graf\">The researchers found ChatGPT significantly increased its perception of hostility in the neutral faces after it was prompted with the two opinion articles \u2014 a 7.9% increase for the left-wing article and a 9.3% increase for the right-wing article.<\/p>\n<p id=\"anchor-1e7844\" class=\"body-graf\">\u201cWe wanted to know if ideological priming affects how the AI perceives humans, not just how it talks about politics,\u201d Finkelstein said, arguing that the results have \u201cmassive implications for any application where AI evaluates people,\u201d like in hiring or security settings.<\/p>\n<p id=\"anchor-577711\" class=\"endmark body-graf\">\u201cThis is a public health issue unfolding in private conversations,\u201d Finkelstein said. \u201cWe need research into relational frameworks for human-AI interaction.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Artificial intelligence chatbot ChatGPT can quickly absorb and reflect authoritarian ideas, according to a new report. Researchers with&hellip;\n","protected":false},"author":2,"featured_media":427825,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-427824","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/427824","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=427824"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/427824\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/427825"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=427824"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=427824"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=427824"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}