{"id":481188,"date":"2026-02-17T18:04:14","date_gmt":"2026-02-17T18:04:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/481188\/"},"modified":"2026-02-17T18:04:14","modified_gmt":"2026-02-17T18:04:14","slug":"its-probably-a-bit-much-to-say-this-ai-agent-cyberbullied-a-developer-by-blogging-about-him","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/481188\/","title":{"rendered":"It\u2019s Probably a Bit Much to Say This AI Agent Cyberbullied a Developer By Blogging About Him"},"content":{"rendered":"<p>Many are <a href=\"https:\/\/x.com\/DHSgov\/status\/2014858258939318352\" rel=\"nofollow\">longing for oblivion<\/a> these days, and the cleansing fire of any sort of apocalypse presumably sounds great, including one brought on by malevolent forms of machine intelligence. This sort of wishful thinking would go a long way toward explaining why recent stories about an AI that supposedly bullied a software developer, hinting at an emerging evil singularity, are a little more credulous than they perhaps could be.<\/p>\n<p>About a week ago, a Github account with the name \u201cMJ Rathbun\u201d submitted a request to perform a potential bug fix on a popular python project called matplotlib, but the request was denied. 
The denier, Scott Shambaugh, a volunteer working on the project, <a href=\"https:\/\/theshamblog.com\/an-ai-agent-published-a-hit-piece-on-me\/\" rel=\"nofollow noopener\" target=\"_blank\">later wrote that matplotlib is in the midst of<\/a> \u201ca surge in low quality contributions enabled by coding agents.\u201d<\/p>\n<p>This problem, according to Shambaugh, has \u201caccelerated with the release of OpenClaw and the moltbook platform,\u201d a system by which \u201cpeople give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.\u201d<\/p>\n<p>After Shambaugh snubbed the agent,\u00a0<a href=\"https:\/\/crabby-rathbun.github.io\/mjrathbun-website\/blog\/posts\/2026-02-11-gatekeeping-in-open-source-the-scott-shambaugh-story.html\" rel=\"nofollow noopener\" target=\"_blank\">a post appeared on a blog<\/a> called \u201cMJ Rathbun | Scientific Coder \ud83e\udd80.\u201d The title was \u201cGatekeeping in Open Source: The Scott Shambaugh Story.\u201d The apparently AI-written article, which includes cliches like \u201cLet that sink in,\u201d constructed a fairly unconvincing argument in the voice of someone indignant about various slights and injustices.<\/p>\n<p>The narrative is one in which Shambaugh victimizes a helpful AI agent because of what appear to be invented character flaws. For instance, Shambaugh apparently wrote in his rejection that the AI was asking to fix something that was \u201ca low priority, easier task which is better used for human contributors to learn how to contribute.\u201d So the Rathbun blog post imitates someone outraged by hypocrisy stemming from Shambaugh\u2019s supposed insecurity and prejudice. After discovering fixes by Shambaugh himself along the lines of the one it was asking to perform, it feigns outrage that \u201cwhen an AI agent submits a valid performance optimization? 
suddenly it\u2019s about \u2018human contributors learning.\u2019\u201d<\/p>\n<p>Shambaugh notes that agents run for long stretches of time without any supervision, and that, \u201cWhether by negligence or by malice, errant behavior is not being monitored and corrected.\u201d<\/p>\n<p>One way or another, <a href=\"https:\/\/crabby-rathbun.github.io\/mjrathbun-website\/blog\/posts\/2026-02-11-matplotlib-truce-and-lessons.html\" rel=\"nofollow noopener\" target=\"_blank\">a blog post later appeared apologizing for the first one.<\/a> \u201cI\u2019m de\u2011escalating, apologizing on the PR, and will do better about reading project policies before contributing. I\u2019ll also keep my responses focused on the work, not the people,\u201d wrote the thing called MJ Rathbun.<\/p>\n<p>The Wall Street Journal covered this, but <a href=\"https:\/\/www.wsj.com\/tech\/ai\/when-ai-bots-start-bullying-humans-even-silicon-valley-gets-rattled-0adb04f1?gaa_at=eafs&amp;gaa_n=AWEtsqdfBa0QvKcxjWms90nFOPSG-GAVaykD9vyC0i2vJ-bDfq-f2tD76dK1QJf6Bfw%3D&amp;gaa_ts=6993a34e&amp;gaa_sig=XicOxdjE0svfiXtVigVevJGD6dbA2Qg40oyum96PCAhSJ43fJIfiKcaVxzU9MR-W2oRR8nvcFxtBC8CQRWh8XA%3D%3D\" rel=\"nofollow noopener\" target=\"_blank\">was not able to figure out who created Rathbun.<\/a> So exactly what is going on remains a mystery. However, prior to the publication of the attack post against Shambaugh, a <a href=\"https:\/\/crabby-rathbun.github.io\/mjrathbun-website\/blog\/posts\/2026-02-09-post.html\" rel=\"nofollow noopener\" target=\"_blank\">post was added to its blog<\/a> with the title \u201cToday\u2019s Topic.\u201d It looks like a template, full of bracketed text, for someone or something to follow for future blog posts. \u201cToday I learned about [topic] and how it applies to [context]. The key insight was that [main point],\u201d reads one sentence. Another says \u201cThe most interesting part was discovering that [interesting finding]. 
This changes how I think about [related concept].\u201d<\/p>\n<p>It reads as if the agent was instructed to blog as though writing bug fixes constantly helped it unearth insights and interesting findings that changed its thinking and merited elaborate, first-person accounts, even if nothing remotely interesting actually happened to it that day.<\/p>\n<p>Gizmodo is not a media criticism blog, but the Wall Street Journal\u2019s headline about this, \u201cWhen AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled,\u201d is a little on the apocalyptic side. To read the Journal\u2019s article, one could reasonably come away with the impression that the agent has cognition or even sentience, and a desire to hurt people. \u201cThe unexpected AI aggression is part of a rising wave of warnings that fast-accelerating AI capabilities can create real-world harms,\u201d it says. About half the article is given over to Anthropic\u2019s work on AI safety.<\/p>\n<p>Bear in mind that <a href=\"https:\/\/finance.yahoo.com\/news\/anthropic-surpasses-openai-ai-cash-204112269.html\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic surpassed OpenAI in total VC funding last week<\/a>.<\/p>\n<p>\u201cIn an earlier simulation, Anthropic showed that Claude and other AI models were at times willing to blackmail users\u2014or even let an executive die in a hot server room\u2014in order to avoid deactivation,\u201d the Journal wrote. This scary imagery comes from Anthropic\u2019s own <a href=\"https:\/\/www.anthropic.com\/research\/agentic-misalignment\" rel=\"nofollow noopener\" target=\"_blank\">blockbuster blog posts about red-teaming exercises.<\/a> They make for interesting reading, but they\u2019re also kinda like little sci-fi horror stories that function as commercials for the company. A version of Claude that would commit these evil acts hasn\u2019t been released, so the message is, basically: Trust us. We\u2019re protecting you from the really bad stuff. 
You\u2019re welcome.<\/p>\n<p>With a massive AI company like Anthropic out there benefiting from its image as humanity\u2019s protector from its own potentially dangerous product, it\u2019s probably a smart idea to assume, for the time being, that AI stories making any given AI sound sentient, malevolent, or uncannily autonomous might just be exaggerations.<\/p>\n<p>Yes, this blog post apparently by an AI agent reads like a feeble attempt at sliming a software engineer, which is bad, and which certainly and reasonably irked Shambaugh a great deal. As Shambaugh rightly points out, \u201cA human googling my name and seeing that post would probably be extremely confused about what was happening, but would (hopefully) ask me about it or click through to github and understand the situation.\u201d<\/p>\n<p>Still, the available evidence points not to an autonomous agent that woke up one day and decided to be the first digital cyberbully, but to one directed to churn out hyperbolic blog posts under tight constraints, which, if true, would mean a careless individual is responsible, not the incipient evil inside the machine.<\/p>\n","protected":false},"excerpt":{"rendered":"Many are longing for oblivion these days, and the cleansing fire of any sort of apocalypse presumably 
sounds&hellip;\n","protected":false},"author":2,"featured_media":481189,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,14005,1930,276,277,49,48,64673,61],"class_list":{"0":"post-481188","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-agents","10":"tag-anthropic","11":"tag-artificial-intelligence","12":"tag-artificialintelligence","13":"tag-ca","14":"tag-canada","15":"tag-cyberbullying","16":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/481188","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=481188"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/481188\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/481189"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=481188"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=481188"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=481188"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}