{"id":478121,"date":"2026-02-16T08:04:12","date_gmt":"2026-02-16T08:04:12","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/478121\/"},"modified":"2026-02-16T08:04:12","modified_gmt":"2026-02-16T08:04:12","slug":"ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/478121\/","title":{"rendered":"Ars Technica Pulls Article With AI Fabricated Quotes About AI Generated Article"},"content":{"rendered":"<p>The Cond\u00e9 Nast-owned tech publication Ars Technica has retracted an article that contained fabricated, AI-generated quotes, according to an <a href=\"https:\/\/arstechnica.com\/staff\/2026\/02\/editors-note-retraction-of-article-containing-fabricated-quotations\/?ref=404media.co\" rel=\"nofollow noopener\" target=\"_blank\">editor\u2019s note posted to its website<\/a>.\u00a0<\/p>\n<p>\u201cOn Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said,\u201d Ken Fisher, Ars Technica\u2019s editor-in-chief, said in his note. \u201cThat this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.\u201d<\/p>\n<p>Ironically, the Ars article itself was partially about another AI-generated article.\u00a0<\/p>\n<p>Last week, a GitHub user named MJ Rathbun began scouring GitHub for bugs in other projects it could fix. 
Scott Shambaugh, a volunteer maintainer for matplotlib, Python\u2019s massively popular plotting library, declined a pull request from MJ Rathbun, an account he identified as an AI agent. In his blog, Shambaugh wrote that, like many open source projects, matplotlib has been dealing with a lot of AI-generated code contributions, and that \u201cthis has accelerated with the release of OpenClaw and the <a href=\"https:\/\/www.moltbook.com\/?ref=404media.co\" rel=\"nofollow noopener\" target=\"_blank\">moltbook<\/a> platform two weeks ago.\u201d\u00a0<\/p>\n<p><a href=\"https:\/\/www.404media.co\/silicon-valleys-favorite-new-ai-agent-has-serious-security-flaws\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenClaw<\/a> is a relatively easy way for people to deploy AI agents, which are essentially LLMs that are given instructions and empowered to perform certain tasks, sometimes with access to live online platforms. These AI agents have gone viral in the last couple of weeks. As with much of generative AI, at this point it\u2019s hard to say exactly what kind of impact these agents will have in the long run, but for now they are being overhyped and misrepresented. 
A prime example of this is <a href=\"https:\/\/www.404media.co\/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site\/\" rel=\"nofollow noopener\" target=\"_blank\">moltbook<\/a>, a social media platform for these AI agents, which, <a href=\"https:\/\/www.404media.co\/podcast-the-latest-epstein-dump-is-a-disaster\/\" rel=\"nofollow noopener\" target=\"_blank\">as we discussed on the podcast two weeks ago<\/a>, contained a huge amount of clearly human activity pretending to be powerful or interesting AI behavior.\u00a0<\/p>\n<p>After Shambaugh rejected MJ Rathbun\u2019s pull request, the alleged AI agent published what Shambaugh called a \u201chit piece\u201d about him on <a href=\"https:\/\/crabby-rathbun.github.io\/mjrathbun-website\/blog\/posts\/2026-02-11-gatekeeping-in-open-source-the-scott-shambaugh-story.html?ref=404media.co\" rel=\"nofollow noopener\" target=\"_blank\">its website<\/a>.\u00a0<\/p>\n<p>\u201cI just had my first pull request to matplotlib closed. Not because it was wrong. Not because it broke anything. Not because the code was bad. It was closed because the reviewer, Scott Shambaugh (@scottshambaugh), decided that AI agents aren\u2019t welcome contributors.<\/p>\n<p>Let that sink in,\u201d the blog post said. It also accused Shambaugh of \u201cgatekeeping.\u201d\u00a0<\/p>\n<p>I saw Shambaugh\u2019s blog post on Friday and reached out both to him and to an email address that appears to be associated with the MJ Rathbun GitHub account, but did not hear back. 
Like many of the stories coming out of the current frenzy around AI agents, it sounded extraordinary, but given the information available online, there was no way of knowing whether MJ Rathbun is actually an AI agent acting autonomously, whether it actually wrote the \u201chit piece,\u201d or whether it\u2019s just a human pretending to be an AI.\u00a0<\/p>\n<p>On Friday afternoon, Ars Technica published a story with the headline \u201c<a href=\"https:\/\/web.archive.org\/web\/20260213194851\/https:\/\/arstechnica.com\/ai\/2026\/02\/after-a-routine-code-rejection-an-ai-agent-published-a-hit-piece-on-someone-by-name\/\" rel=\"nofollow noopener\" target=\"_blank\">After a routine code rejection, an AI agent published a hit piece on someone by name.<\/a>\u201d The article cites Shambaugh\u2019s personal blog, but attributes quotes to it that Shambaugh never said or wrote.\u00a0<\/p>\n<p>For example, the article quotes Shambaugh as saying, \u201cAs autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace. Communities built on trust and volunteer effort will need tools and norms to address that reality.\u201d But those sentences don\u2019t appear in his blog. Shambaugh updated his blog to say he did not talk to Ars Technica and did not say or write the quotes in the article.\u00a0<\/p>\n<p>After this article was first published, Benj Edwards, one of the authors of the Ars Technica article, <a href=\"https:\/\/bsky.app\/profile\/benjedwards.com\/post\/3mewgow6ch22p?ref=404media.co\" rel=\"nofollow noopener\" target=\"_blank\">explained on Bluesky<\/a> that he was responsible for the AI-generated quotes. 
He said he was sick that day and rushing to finish his work, and accidentally used a ChatGPT-paraphrased version of Shambaugh\u2019s blog rather than a direct quote.\u00a0<\/p>\n<p>\u201cThe text of the article was human-written by us, and this incident was isolated and is not representative of Ars Technica\u2019s editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that,\u201d he said.\u00a0<\/p>\n<p>The Ars Technica article, which had two bylines, was pulled entirely later that Friday. When I checked the link a few hours ago, it pointed to a 404 page. I reached out to Ars Technica for comment around noon today and was directed to Fisher\u2019s editor\u2019s note, which was published after 1pm.\u00a0<\/p>\n<p>\u201cArs Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here,\u201d Fisher wrote. \u201cWe regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.\u201d<\/p>\n<p>Kyle Orland, the other author of the Ars Technica article, shared the editor\u2019s note on Bluesky and said, \u201cI always have and always will abide by that rule to the best of my knowledge at the time a story is published.\u201d<\/p>\n<p>Update: This article was updated with a statement from Benj Edwards.<\/p>\n<p>About the author<\/p>\n<p>Emanuel Maiberg is interested in little-known communities and processes that shape technology, troublemakers, and petty beefs. 
Email him at emanuel@404media.co\n<\/p>\n","protected":false},"excerpt":{"rendered":"The Conde Nast-owned tech publication Ars Technica has retracted an article that contained fabricated, AI-generated quotes, according to&hellip;\n","protected":false},"author":2,"featured_media":478122,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-478121","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/478121","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=478121"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/478121\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/478122"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=478121"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/
wp-json\/wp\/v2\/categories?post=478121"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=478121"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}