{"id":477060,"date":"2026-02-15T19:32:08","date_gmt":"2026-02-15T19:32:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/477060\/"},"modified":"2026-02-15T19:32:08","modified_gmt":"2026-02-15T19:32:08","slug":"swarms-of-ai-bots-are-threatening-democracy","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/477060\/","title":{"rendered":"Swarms of AI bots are threatening democracy"},"content":{"rendered":"<p>In mid-2023, around the time Elon Musk rebranded Twitter as X but before he discontinued free academic access to the platform\u2019s data, my colleagues <a href=\"https:\/\/scholar.google.com\/citations?hl=en&amp;user=f_kGJwkAAAAJ&amp;view_op=list_works&amp;sortby=pubdate\" rel=\"nofollow noopener\" target=\"_blank\">and I<\/a> looked for signs of <a href=\"https:\/\/doi.org\/10.1145\/2818717\" rel=\"nofollow noopener\" target=\"_blank\">social bot<\/a> accounts posting content generated by artificial intelligence. Social bots are AI software agents that produce content and interact with people on social media. We uncovered a network of over a thousand bots involved in crypto scams. We dubbed this the <a href=\"https:\/\/doi.org\/10.51685\/jqd.2024.icwsm.7\" rel=\"nofollow noopener\" target=\"_blank\">\u201cfox8\u201d botnet<\/a> after one of the fake news websites it was designed to amplify.<\/p>\n<p>We were able to identify these accounts because the coders were a bit sloppy: They did not catch occasional posts with self-revealing text generated by ChatGPT, such as when the AI model refused to comply with prompts that violated its terms. The most common self-revealing response was \u201cI\u2019m sorry, but I cannot comply with this request as it violates OpenAI\u2019s Content Policy on generating harmful or inappropriate content. 
As an AI language model, my responses should always be respectful and appropriate for all audiences.\u201d<\/p>\n<p>We believe fox8 was only the tip of the iceberg because better coders can filter out self-revealing posts or use open-source AI models fine-tuned to remove ethical guardrails.<\/p>\n<p>The fox8 bots created fake engagement with each other and with human accounts through realistic back-and-forth discussions and retweets. In this way, they tricked X\u2019s recommendation algorithm into amplifying exposure to their posts and accumulated significant numbers of followers and influence.<\/p>\n<p>Such a level of coordination among inauthentic online agents was unprecedented \u2013 AI models had been weaponized to give rise to a new generation of social agents, much more sophisticated than <a href=\"https:\/\/theconversation.com\/how-many-bots-are-on-twitter-the-question-is-difficult-to-answer-and-misses-the-point-183425\" rel=\"nofollow noopener\" target=\"_blank\">earlier social bots<\/a>. Machine-learning tools to detect social bots, like our own <a href=\"https:\/\/botometer.osome.iu.edu\/\" rel=\"nofollow noopener\" target=\"_blank\">Botometer<\/a>, were unable to discriminate between these AI agents and human accounts in the wild. Even AI models trained to detect AI-generated content failed.<\/p>\n<p>Bots in the era of generative AI<\/p>\n<p>Fast-forward a few years: Today, people and organizations with malicious intent have access to more powerful AI language models \u2013 including open-source ones \u2013 while social media platforms have relaxed or eliminated moderation efforts. They even provide financial incentives for engaging content, irrespective of whether it\u2019s real or AI-generated. This is a perfect storm for foreign and domestic influence operations targeting democratic elections. 
For example, an AI-controlled bot swarm could create the false impression of widespread, bipartisan opposition to a political candidate.<\/p>\n<p>The current U.S. administration has <a href=\"https:\/\/www.nytimes.com\/2025\/02\/20\/business\/trump-foreign-influence-election-interference.html\" rel=\"nofollow noopener\" target=\"_blank\">dismantled<\/a> federal programs that <a href=\"https:\/\/www.techpolicy.press\/the-downfall-of-the-global-engagement-center-and-disappearing-guardrails-against-disinformation\/\" rel=\"nofollow noopener\" target=\"_blank\">combat<\/a> such hostile campaigns and <a href=\"https:\/\/www.nytimes.com\/2025\/05\/15\/business\/trump-online-misinformation-grants.html\" rel=\"nofollow noopener\" target=\"_blank\">defunded<\/a> <a href=\"https:\/\/www.techpolicy.press\/fantasy-becomes-reality-as-trump-takes-revenge-on-disinformation-researchers\/\" rel=\"nofollow noopener\" target=\"_blank\">research<\/a> efforts to study them. Researchers <a href=\"https:\/\/independenttechresearch.org\/letter-twitters-new-api-plans-will-devastate-public-interest-research\/\" rel=\"nofollow noopener\" target=\"_blank\">no longer have access<\/a> to the platform data that would make it possible to detect and monitor these kinds of online manipulation.<\/p>\n<p>I am part of an interdisciplinary team of computer science, AI, cybersecurity, psychology, social science, journalism and policy researchers who have sounded the alarm about the <a href=\"http:\/\/doi.org\/10.1126\/science.adz1697\" rel=\"nofollow noopener\" target=\"_blank\">threat of malicious AI swarms<\/a>. We believe that current AI technology allows organizations with malicious intent to deploy large numbers of autonomous, adaptive, coordinated agents to multiple social media platforms. 
These agents enable influence operations that are far more scalable, sophisticated and adaptive than simple scripted misinformation campaigns.<\/p>\n<p>Rather than generating identical posts or obvious spam, AI agents can generate varied, <a href=\"http:\/\/doi.org\/10.1038\/s42256-023-00690-w\" rel=\"nofollow noopener\" target=\"_blank\">credible content at a large scale<\/a>. The swarms can send people messages tailored to their individual preferences and to the context of their online conversations. The swarms can tailor tone, style and content to respond dynamically to human interaction and platform signals such as numbers of likes or views.<\/p>\n<p>Synthetic consensus<\/p>\n<p>In a study my colleagues and I conducted last year, we used a social media model to <a href=\"https:\/\/doi.org\/10.1093\/pnasnexus\/pgae258\" rel=\"nofollow noopener\" target=\"_blank\">simulate swarms of inauthentic social media accounts<\/a> using different tactics to influence a target online community. One tactic was by far the most effective: infiltration. Once an online group is infiltrated, malicious AI swarms can create the illusion of broad public agreement around the narratives they are programmed to promote. 
This exploits a psychological phenomenon known as <a href=\"https:\/\/thedecisionlab.com\/reference-guide\/psychology\/social-proof\" rel=\"nofollow noopener\" target=\"_blank\">social proof<\/a>: Humans are naturally inclined to believe something if they perceive that \u201ceveryone is saying it.\u201d<\/p>\n<p><a href=\"https:\/\/images.theconversation.com\/files\/717691\/original\/file-20260211-112-6vwuxt.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" alt=\"A diagram showing clusters of gray and yellow dots with lines connecting many of them.\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/02\/file-20260211-112-6vwuxt.jpg\"  \/><\/a>This diagram shows the influence network of an AI swarm on Twitter (now X) in 2023. The yellow dots represent a swarm of social bots controlled by an AI model. Gray dots represent legitimate accounts that follow the AI agents.<br \/>Filippo Menczer and Kai-Cheng Yang, <a class=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/\" rel=\"nofollow noopener\" target=\"_blank\">CC BY-NC-ND<\/a><\/p>\n<p>Such social media <a href=\"https:\/\/doi.org\/10.1145\/1963192.1963301\" rel=\"nofollow noopener\" target=\"_blank\">astroturf tactics<\/a> have been around for many years, but malicious AI swarms can effectively create believable interactions with targeted human users at a large scale, and get those users to follow the inauthentic accounts. For example, agents can talk about the latest game to a sports fan and about current events to a news junkie. 
They can generate language that resonates with the interests and opinions of their targets.<\/p>\n<p>Even if individual claims are debunked, the persistent chorus of independent-sounding voices can make radical ideas seem mainstream and amplify negative feelings toward \u201cothers.\u201d Manufactured synthetic consensus is a very real threat to the <a href=\"https:\/\/documents.worldbank.org\/en\/publication\/documents-reports\/documentdetail\/161991468155123204\" rel=\"nofollow noopener\" target=\"_blank\">public sphere<\/a>, the mechanisms democratic societies use to form shared beliefs, make decisions and trust public discourse. If citizens cannot reliably distinguish between genuine public opinion and algorithmically generated simulation of unanimity, democratic decision-making could be severely compromised.<\/p>\n<p>Mitigating the risks<\/p>\n<p>Unfortunately, there is not a single fix. Regulation granting researchers access to platform data would be a first step. Understanding how swarms behave collectively would be essential to anticipate risks. <a href=\"https:\/\/theconversation.com\/how-foreign-operations-are-manipulating-social-media-to-influence-your-views-240089\" rel=\"nofollow noopener\" target=\"_blank\">Detecting coordinated behavior<\/a> is a key challenge. Unlike simple copy-and-paste bots, malicious swarms produce varied output that resembles normal human interaction, making detection much more difficult.<\/p>\n<p>In our lab, we design methods to detect <a href=\"http:\/\/doi.org\/10.1609\/icwsm.v15i1.18075\" rel=\"nofollow noopener\" target=\"_blank\">patterns of coordinated behavior<\/a> that deviate from normal human interaction. Even if agents look different from each other, their underlying objectives often reveal patterns in timing, network movement and narrative trajectory that are unlikely to occur naturally.<\/p>\n<p>Social media platforms could use such methods. 
I believe that AI and social media platforms should also <a href=\"https:\/\/indicator.media\/p\/tech-platforms-fail-to-label-ai-content-c2pa-metadata\" rel=\"nofollow noopener\" target=\"_blank\">more aggressively<\/a> adopt standards to apply watermarks to AI-generated content and <a href=\"https:\/\/www.newsguardtech.com\/special-reports\/top-ai-chatbots-dont-recognize-ai-generated-videos\/\" rel=\"nofollow noopener\" target=\"_blank\">recognize and label such content<\/a>. Finally, restricting the monetization of inauthentic engagement would reduce the financial incentives for influence operations and other malicious groups to use synthetic consensus.<\/p>\n<p>The threat is real<\/p>\n<p>While these measures might mitigate the systemic risks of malicious AI swarms before they become entrenched in political and social systems worldwide, the current political landscape in the U.S. seems to be moving in the opposite direction. The Trump administration has <a href=\"https:\/\/theconversation.com\/whats-at-stake-in-trumps-executive-order-aiming-to-curb-state-level-ai-regulation-266668\" rel=\"nofollow noopener\" target=\"_blank\">aimed to reduce AI and social media regulation<\/a> and is instead favoring rapid deployment of AI models over safety.<\/p>\n<p>The threat of malicious AI swarms is no longer theoretical: Our evidence suggests these tactics are already being deployed. 
I believe that policymakers and technologists should increase the cost, risk and visibility of such manipulation.<\/p>\n<p><a href=\"https:\/\/theconversation.com\/profiles\/filippo-menczer-317794\" rel=\"nofollow noopener\" target=\"_blank\">Filippo Menczer<\/a>, Professor of Informatics and Computer Science, <a href=\"https:\/\/theconversation.com\/institutions\/indiana-university-1368\" rel=\"nofollow noopener\" target=\"_blank\">Indiana University<\/a><\/p>\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\" rel=\"nofollow noopener\" target=\"_blank\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/swarms-of-ai-bots-can-sway-peoples-beliefs-threatening-democracy-274778\" rel=\"nofollow noopener\" target=\"_blank\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"In mid-2023, around the time Elon Musk rebranded Twitter as X but before he discontinued free academic 
access&hellip;\n","protected":false},"author":2,"featured_media":477061,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-477060","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/477060","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=477060"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/477060\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/477061"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=477060"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=477060"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=477060"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}