{"id":430723,"date":"2026-01-24T18:02:23","date_gmt":"2026-01-24T18:02:23","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/430723\/"},"modified":"2026-01-24T18:02:23","modified_gmt":"2026-01-24T18:02:23","slug":"ai-is-quietly-poisoning-itself-and-pushing-models-toward-collapse-but-theres-a-cure","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/430723\/","title":{"rendered":"AI is quietly poisoning itself and pushing models toward collapse &#8211; but there&#8217;s a cure"},"content":{"rendered":"<p> <img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/gettyimages-1149769748.jpg\" alt=\"Computer trash symbol on dynamic digital background. Glowing digital data delete icon abstract 3d illustration. Bright recycling sign.\" width=\"1280\" height=\"717\" fetchpriority=\"low\"\/>   Arkadiusz Wargu\u0142a via iStock \/ Getty Images Plus<\/p>\n<p>Follow ZDNET:\u00a0<a href=\"https:\/\/cc.zdnet.com\/v1\/otc\/00hQi47eqnEWQ6T9d4QLBUc?element=BODY&amp;element_label=Add+us+as+a+preferred+source&amp;module=LINK&amp;object_type=text-link&amp;object_uuid=1b45aed0-7631-4cdf-9e17-ffc59df43fc1&amp;position=1&amp;template=article&amp;track_code=__COM_CLICK_ID__&amp;url=https%3A%2F%2Fwww.google.com%2Fpreferences%2Fsource%3Fq%3Dzdnet.com&amp;view_instance_uuid=3091965c-9feb-4970-b244-c25825863d3c\" rel=\"noopener nofollow sponsored\" target=\"_blank\">Add us as a preferred source<\/a>\u00a0on Google.<\/p>\n<p>ZDNET&#8217;s key takeaways:<\/p>\n<ul>\n<li>When AI LLMs &#8220;learn&#8221; from other AIs, the result is GIGO.<\/li>\n<li>You will need to verify your data before you can trust your AI answers.<\/li>\n<li>This approach requires a dedicated effort across your company.<\/li>\n<\/ul>\n<p>According to tech analyst Gartner, <a href=\"https:\/\/www.gartner.com\/en\/newsroom\/press-releases\/2026-01-21-gartner-predicts-by-2028-50-percent-of-organizations-will-adopt-zero-trust-data-governance-as-unverified-ai-generated-data-grows\" target=\"_blank\" 
rel=\"noopener nofollow\" class=\"c-regularLink\">AI data is rapidly becoming a classic Garbage In\/Garbage Out (GIGO) problem<\/a> for users. That&#8217;s because organizations&#8217; <a href=\"https:\/\/www.zdnet.com\/article\/ais-not-reasoning-at-all-how-this-team-debunked-the-industry-hype\/\" rel=\"nofollow noopener\" target=\"_blank\">AI systems and large language models<\/a> (LLMs) are flooded with unverified, AI\u2011generated content that cannot be trusted.\u00a0<\/p>\n<p>Model collapse<\/p>\n<p>You know this better as <a href=\"https:\/\/www.zdnet.com\/article\/ai-slop-brainrot-youtube-shorts\/\" rel=\"nofollow noopener\" target=\"_blank\">AI slop<\/a>. While annoying to you and me, it&#8217;s deadly to AI because it poisons the LLMs with fake data. The result is what&#8217;s called in AI circles &#8220;<a href=\"https:\/\/www.theregister.com\/2025\/05\/27\/opinion_column_ai_model_collapse\/\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">Model Collapse<\/a>.&#8221; AI company <a href=\"https:\/\/www.aquant.ai\/\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">Aquant<\/a> defined this trend: &#8220;In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality.&#8221;\u00a0<\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/4-new-roles-will-lead-agentic-ai-revolution\/\" rel=\"nofollow noopener\" target=\"_blank\">4 new roles will lead the agentic AI revolution &#8211; here&#8217;s what they require<\/a><\/p>\n<p>However, I think that definition is much too kind. It&#8217;s not a case of &#8220;can&#8221; &#8212; with bad data, AI results &#8220;will&#8221; drift away from reality. \u00a0<\/p>\n<p>Zero trust<\/p>\n<p>This issue is already apparent. 
Gartner predicted that 50% of organizations will have a zero\u2011trust posture for <a href=\"https:\/\/www.zdnet.com\/article\/5-ways-to-feed-your-ai-the-right-business-data-and-get-gold-dust-not-garbage-back\/\" rel=\"nofollow noopener\" target=\"_blank\">data governance<\/a>\u00a0by 2028. These enterprises will have no choice, because unverified AI\u2011generated data is proliferating across corporate systems and public sources.\u00a0<\/p>\n<p>The analyst argued that enterprises can no longer assume data is human\u2011generated or trustworthy by default, and must instead authenticate, verify, and track data lineage to protect business and financial outcomes.<\/p>\n<p>Ever try to authenticate and verify data from AI? It&#8217;s not easy. It can be done, but <a href=\"https:\/\/www.thedeepview.com\/articles\/ibm-warns-ai-spend-fails-without-ai-literacy\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">AI literacy<\/a> isn&#8217;t a common skill.\u00a0<\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/got-ai-skills-you-can-earn-43-more-in-your-next-job-and-not-just-for-tech-work\/\" rel=\"nofollow noopener\" target=\"_blank\">Got AI skills? You can earn 43% more in your next job &#8211; and not just for tech work<\/a><\/p>\n<p>As IBM distinguished engineer Phaedra Boinodiris told me recently: &#8220;Just having the data is not enough. Understanding the context and the relationships of the data is key. This is why you need to have an interdisciplinary approach to who gets to decide what data is correct. Does it represent all the different communities that we need to serve? Do we understand the relationships of how this data was gathered?&#8221;\u00a0<\/p>\n<p>Making matters worse, GIGO now operates at AI scale. This situation means that flawed inputs can cascade through automated workflows and decision systems, producing worse results. 
Yes, that&#8217;s right: if you think bias, hallucinations, and simple factual errors in AI results are bad today, wait until tomorrow.\u00a0<\/p>\n<p>To counter this concern, Gartner said businesses should adopt\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/zero-trust-and-cybersecurity-heres-what-it-means-and-why-it-matters\/\" rel=\"nofollow noopener\" target=\"_blank\">zero\u2011trust<\/a>\u00a0thinking. Originally developed for networks, zero-trust is now being applied to data governance in response to AI risks.\u00a0<\/p>\n<p>Also:\u00a0<a href=\"https:\/\/www.zdnet.com\/article\/deploying-ai-agents-7-lessons-from-trenches-experts\/\" rel=\"nofollow noopener\" target=\"_blank\">Deploying AI agents is not your typical software launch &#8211; 7 lessons from the trenches<\/a><\/p>\n<p>Stronger mechanisms<\/p>\n<p>Gartner suggested many companies will need stronger mechanisms to authenticate data sources, verify quality, tag AI\u2011generated content, and continuously manage metadata so they know what their systems are actually consuming. The analyst proposed the following steps:<\/p>\n<ul>\n<li>Appoint an AI governance leader: Establish a dedicated role responsible for AI governance, including zero-trust policies, AI risk management, and compliance operations. However, this individual can&#8217;t do the work by themselves. 
They must work closely with data and analytics teams to ensure <a href=\"https:\/\/www.gartner.com\/en\/information-technology\/topics\/ai-readiness\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">AI-ready<\/a> data and systems can handle AI-generated content.<\/li>\n<li>Foster cross-functional collaboration: Cross-functional teams must include security, data, analytics, and other relevant stakeholders to conduct comprehensive <a href=\"https:\/\/www.gartner.com\/en\/webinar\/765395\/1736478-leverage-the-gartner-framework-to-manage-ai-governance-trust-risk-and-security\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">data risk assessments<\/a>. I&#8217;d add representatives of any department in your company that uses AI. Only the users can tell you what they really need from AI. This crew&#8217;s job is to identify and address AI-generated business risks.<\/li>\n<li>Leverage existing governance policies: Build on current data and analytics\u00a0<a href=\"https:\/\/www.gartner.com\/en\/articles\/ai-ethics-governance-and-compliance\" target=\"_blank\" rel=\"noopener nofollow\" class=\"c-regularLink\">governance frameworks<\/a> and update security, metadata management, and ethics-related policies to address AI-generated data risks. You&#8217;ll have more than enough work without reinventing the wheel.<\/li>\n<li>Adopt active metadata practices: Enable real-time alerts when data is stale or requires recertification. I&#8217;ve already seen many examples where old data is wrong. For example, I asked several AI chatbots the other day what the default scheduler was in Linux today. The common answer: the Completely Fair Scheduler (CFS). Yes, CFS is still in use, but starting with 2023&#8217;s 6.6 kernel, it was retired in favor of the Earliest Eligible Virtual Deadline First (EEVDF) scheduler.<\/li>\n<\/ul>\n<p>
My point is that anyone other than someone like me, who knows Linux pretty well, would never get the right answer from AI.\u00a0<\/p>\n<p>So, will AI still be useful in 2028? Sure, but ensuring it&#8217;s useful and not heading down a primrose path to a bad answer will require a lot of good, old-fashioned people work. However, this role will at least be <a href=\"https:\/\/www.zdnet.com\/article\/reinventing-your-career-in-the-age-of-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">a new job generated by the so-called AI revolution<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"Arkadiusz Wargu\u0142a via iStock \/ Getty Images Plus Follow ZDNET:\u00a0Add us as a preferred source\u00a0on Google. ZDNET&#8217;s key&hellip;\n","protected":false},"author":2,"featured_media":430724,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-430723","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/430723","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=430723"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/430723\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/430724"}],"wp:attachment":[{
"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=430723"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=430723"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=430723"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}