{"id":233159,"date":"2026-01-11T20:41:08","date_gmt":"2026-01-11T20:41:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/233159\/"},"modified":"2026-01-11T20:41:08","modified_gmt":"2026-01-11T20:41:08","slug":"ai-insiders-seek-to-poison-the-data-that-feeds-them-the-register","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/233159\/","title":{"rendered":"AI insiders seek to poison the data that feeds them \u2022 The Register"},"content":{"rendered":"<p>Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for those opposed to the current state of affairs to undertake a mass data poisoning effort to undermine the technology.<\/p>\n<p>Their initiative, dubbed <a href=\"https:\/\/rnsaffn.com\/poison3\/\" rel=\"nofollow noopener\" target=\"_blank\">Poison Fountain<\/a>, asks website operators to add links to their websites that feed AI crawlers poisoned training data. It&#8217;s been up and running for about a week.<\/p>\n<p>AI crawlers visit websites and scrape data that ends up being used to train AI models, a parasitic relationship that has prompted <a href=\"https:\/\/www.theregister.com\/2025\/12\/08\/publishers_say_no_ai_scrapers\/\" rel=\"nofollow noopener\" target=\"_blank\">pushback from publishers<\/a>. When scaped data is accurate, it helps AI models offer quality responses to questions; when it&#8217;s inaccurate, it has the opposite effect.\u00a0<\/p>\n<p>Data poisoning can take various forms and can occur at different stages of the AI model building process. It may follow from buggy code or factual misstatements on a public website. Or it may come from manipulated training data sets, like the <a href=\"https:\/\/silent-branding.github.io\/\" rel=\"nofollow noopener\" target=\"_blank\">Silent Branding<\/a> attack, in which an image data set has been altered to present brand logos within the output of text-to-image diffusion models. 
It should not be confused with poisoning by AI \u2013 making dietary changes on the advice of ChatGPT that <a href=\"https:\/\/www.acpjournals.org\/doi\/10.7326\/aimcc.2024.1260\" rel=\"nofollow noopener\" target=\"_blank\">result in hospitalization<\/a>.<\/p>\n<p>Poison Fountain was inspired by <a href=\"https:\/\/www.theregister.com\/2025\/10\/09\/its_trivially_easy_to_poison\/\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic&#8217;s work on data poisoning<\/a>, specifically a paper published last October that showed data poisoning attacks are more practical than previously believed because only a <a href=\"https:\/\/www.anthropic.com\/research\/small-samples-poison\" rel=\"nofollow noopener\" target=\"_blank\">few malicious documents<\/a> are required to degrade model quality.<\/p>\n<p>The individual who informed The Register about the project asked for anonymity, &#8220;for obvious reasons&#8221; \u2013 the most salient of which is that this person works for one of the major US tech companies involved in the AI boom.<\/p>\n<p>Our source said that the goal of the project is to make people aware of AI&#8217;s Achilles&#8217; heel \u2013 the ease with which models can be poisoned \u2013 and to encourage people to construct information weapons of their own.<\/p>\n<p>We&#8217;re told, but have been unable to verify, that five individuals are participating in this effort, some of whom supposedly work at other major US AI companies. We&#8217;re told we&#8217;ll be provided with cryptographic proof that there&#8217;s more than one person involved as soon as the group can coordinate PGP signing.<\/p>\n<p>The Poison Fountain web page argues the need for active opposition to AI. &#8220;We agree with <a href=\"https:\/\/mitsloan.mit.edu\/ideas-made-to-matter\/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai\" rel=\"nofollow noopener\" target=\"_blank\">Geoffrey Hinton<\/a>: machine intelligence is a threat to the human species,&#8221; the site explains. 
&#8220;In response to this threat we want to inflict damage on machine intelligence systems.&#8221;<\/p>\n<p>It lists two URLs that point to data designed to hinder AI training. One URL points to a standard website accessible via HTTP. The other is a &#8220;darknet&#8221; .onion URL, intended to be difficult to shut down.<\/p>\n<p>The site asks visitors to &#8220;assist the war effort by caching and retransmitting this poisoned training data&#8221; and to &#8220;assist the war effort by feeding this poisoned training data to web crawlers.&#8221;<\/p>\n<p>Our source explained that the poisoned data on the linked pages consists of incorrect code that contains subtle logic errors and other bugs that are designed to damage language models that train on the code.<\/p>\n<p>&#8220;Hinton has clearly stated the danger but we can see he is correct and the situation is escalating in a way the public is not generally aware of,&#8221; our source said, noting that the group has grown concerned because &#8220;we see what our customers are building.&#8221;<\/p>\n<p>Our source declined to provide specific examples that merit concern.<\/p>\n<p>While industry luminaries like Hinton, grassroots organizations like <a href=\"https:\/\/www.stopai.info\/\" rel=\"nofollow noopener\" target=\"_blank\">Stop AI<\/a>, and advocacy organizations like the <a href=\"https:\/\/www.ajl.org\/\" rel=\"nofollow noopener\" target=\"_blank\">Algorithmic Justice League<\/a> have been pushing back against the tech industry for years, much of the debate has focused on the extent of regulatory intervention \u2013 which in the US is presently minimal. 
Coincidentally, AI firms are <a href=\"https:\/\/www.axios.com\/2025\/10\/21\/tech-lobbying-insights-q3\" rel=\"nofollow noopener\" target=\"_blank\">spending<\/a> <a href=\"https:\/\/www.politico.com\/newsletters\/politico-influence\/2025\/07\/23\/ai-lobbying-explosion-00472092\" rel=\"nofollow noopener\" target=\"_blank\">a lot<\/a> <a href=\"https:\/\/news.bgov.com\/bloomberg-government-news\/ai-lobbying-soars-in-washington-among-big-firms-and-upstarts\" rel=\"nofollow noopener\" target=\"_blank\">on lobbying<\/a> to ensure that remains the case.<\/p>\n<p>Those behind the Poison Fountain project contend that regulation is not the answer because the technology is already universally available. They want to kill AI with fire, or rather poison, before it&#8217;s too late.<\/p>\n<p>&#8220;Poisoning attacks compromise the cognitive integrity of the model,&#8221; our source said. &#8220;There&#8217;s no way to stop the advance of this technology, now that it is disseminated worldwide. What&#8217;s left is weapons. This Poison Fountain is an example of such a weapon.&#8221;<\/p>\n<p>There are other AI poisoning projects but some appear to be more focused on <a href=\"https:\/\/aurascape.ai\/llm-search-poisoning-fake-support-numbers\/\" rel=\"nofollow noopener\" target=\"_blank\">generating revenue from scams<\/a> than saving humanity from AI. <a href=\"https:\/\/www.theregister.com\/2024\/01\/20\/nightshade_ai_images\/\" rel=\"nofollow noopener\" target=\"_blank\">Nightshade<\/a>, software designed to make it more difficult for AI crawlers to scrape and exploit artists&#8217; online images, appears to be one of the more comparable initiatives.<\/p>\n<p>The extent to which such measures may be necessary isn&#8217;t obvious because there&#8217;s already concern that <a href=\"https:\/\/spectrum.ieee.org\/ai-coding-degrades\" rel=\"nofollow noopener\" target=\"_blank\">AI models are getting worse<\/a>. 
The models are being fed on their own AI slop and synthetic data in an error-magnifying doom-loop known as &#8220;<a href=\"https:\/\/www.theregister.com\/2024\/07\/25\/ai_will_eat_itself\/\" rel=\"nofollow noopener\" target=\"_blank\">model collapse<\/a>.&#8221; And every factual misstatement and fabulation posted to the internet further pollutes the pool. Thus, AI model makers are keen to <a href=\"https:\/\/www.reuters.com\/business\/media-telecom\/wikipedia-seeks-more-ai-licensing-deals-similar-google-tie-up-co-founder-wales-2025-12-04\/\" rel=\"nofollow noopener\" target=\"_blank\">strike deals with sites like Wikipedia<\/a> that exercise some editorial quality control.<\/p>\n<p>There&#8217;s also an overlap between data poisoning and misinformation campaigns, another term for which is &#8220;social media.&#8221; As noted in an August 2025 NewsGuard <a href=\"https:\/\/www.newsguardtech.com\/wp-content\/uploads\/2025\/09\/August-2025-One-Year-Progress-Report-3.pdf\" rel=\"nofollow noopener\" target=\"_blank\">report<\/a> [PDF], &#8220;Instead of citing data cutoffs or refusing to weigh in on sensitive topics, the LLMs now pull from a polluted online information ecosystem \u2014 sometimes deliberately seeded by vast networks of malign actors, including Russian disinformation operations \u2014 and treat unreliable sources as credible.&#8221;<\/p>\n<p>Academics differ on the extent to which model collapse presents a real risk. But one recent <a href=\"https:\/\/www.arxiv.org\/abs\/2511.05535\" rel=\"nofollow noopener\" target=\"_blank\">paper<\/a> [PDF] predicts that the AI snake could eat its own tail by 2035.\u00a0<\/p>\n<p>Whatever risk AI poses could diminish substantially if the AI bubble pops. A poisoning movement might just accelerate that process. 
\u00ae<\/p>\n","protected":false},"excerpt":{"rendered":"Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for&hellip;\n","protected":false},"author":2,"featured_media":233160,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,125],"class_list":{"0":"post-233159","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/233159","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=233159"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/233159\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/233160"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=233159"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=233159"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=233159"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}