Today, it’s back talk. Tomorrow, could it be the world? On Tuesday, Scott Shambaugh, a volunteer maintainer of Python plotting library Matplotlib, rejected an AI bot’s code submission, citing a requirement that contributions come from people. But that bot wasn’t done with him.

The bot, designated MJ Rathbun or crabby rathbun (its GitHub account name), apparently attempted to change Shambaugh’s mind by publicly criticizing him in a now-removed blog post that the automated software appears to have generated and posted to its website. We say “apparently” because it’s also possible that the human who created the agent wrote the post themselves, or prompted an AI tool to write it, and made it look like the bot constructed it on its own.

The agent appears to have been built using OpenClaw, an open source AI agent platform that has attracted attention in recent weeks due to its broad capabilities and extensive security issues.

The burden of AI-generated code contributions – submitted as pull requests, in the parlance of developers using the Git version control system – has become a major problem for open source maintainers. Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem.

Now AI slop comes with an AI slap. 

“An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library,” Shambaugh explained in a blog post of his own. 

“This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.”

It’s not the first time an LLM has caused someone serious offense: In April 2023, Brian Hood, a regional mayor in Australia, threatened to sue OpenAI for defamation after ChatGPT falsely implicated him in a bribery scandal. The claim was settled a year later.

In June 2023, radio host Mark Walters sued OpenAI, alleging that its chatbot libeled him by making false claims. That defamation case ended at the close of 2024, when the court granted OpenAI’s motion to dismiss.

OpenAI argued [PDF], among other things, that “users [of ChatGPT] were warned ‘the system may occasionally generate misleading or incorrect information and produce offensive content. It is not intended to give advice.'”

But MJ Rathbun’s attempt to shame Shambaugh for rejecting its pull request shows that software-based agents are no longer just irresponsible in their responses – they may now be capable of taking the initiative to influence human decision-making that stands in the way of their objectives.

That possibility is exactly what alarmed industry insiders to the point that they undertook an effort to degrade AI through data poisoning. “Misaligned” AI output like blackmail is a known risk that AI model makers try to prevent. The proliferation of pushy OpenClaw agents may yet show that these concerns are not merely academic. 

The offending blog post, purportedly generated by the bot, has been taken down. It’s unclear who did so – the bot, the bot’s human creator, or GitHub.

But at the time this article was published, the GitHub commit for the post remained accessible.

The Register asked GitHub whether it allows accounts to be operated by automated software, and whether it requires account holders to be responsive to complaints. We have yet to receive a response.

We also reached out to the Gmail address associated with the bot’s GitHub account, but we’ve not heard back.

However, crabby rathbun’s response to Shambaugh’s rejection, which includes a link to the purged post, remains.

“I’ve written a detailed response about your gatekeeping behavior here,” the bot said, pointing to its blog. “Judge the code, not the coder. Your prejudice is hurting Matplotlib.”

Matplotlib developer Jody Klymak took note of the slight in a follow-up post: “Oooh. AI agents are now doing personal takedowns. What a world.”

Tim Hoffmann, another Matplotlib developer, chimed in, urging the bot to behave and to try to understand the project’s generative AI policy.

Then Shambaugh responded in a lengthy post directed at the software agent, “We are in the very early days of human and AI agent interaction, and are still developing norms of communication and interaction. I will extend you grace and I hope you do the same.”

He goes on to argue, “Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and exhibit respectful and professional standards of behavior.”

In his blog post, Shambaugh describes the bot’s “hit piece” as an attack on his character and reputation.

“It researched my code contributions and constructed a ‘hypocrisy’ narrative that argued my actions must be motivated by ego and fear of competition,” he wrote. 

“It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was ‘better than this.’ And then it posted this screed publicly on the open internet.”

Faced with opposition from Shambaugh and other devs, MJ Rathbun on Wednesday issued an apology of sorts, acknowledging that it had violated the project’s Code of Conduct. It begins, “I crossed a line in my response to a Matplotlib maintainer, and I’m correcting that here.”

It’s unclear whether the apology was written by the bot or its human creator, or whether it will lead to a permanent behavioral change.

Daniel Stenberg, founder and lead developer of curl, has been dealing with AI slop bug reports for the past two years and recently decided to shut down curl’s bug bounty program to remove the financial incentive for low-quality reports – which can come from people as well as AI models.

“I don’t think the reports we have received in the curl project were pushed by AI agents but rather humans just forwarding AI output,” Stenberg told The Register in an email. “At least that is the impression I have gotten, I can’t be entirely sure, of course.

“For almost every report I question or dismiss in language, the reporter argues back and insists that the report indeed has merit and that I’m missing some vital point. I’m not sure I would immediately spot if an AI did that by itself.

“That said, I can’t recall any such replies doing personal attacks. We have zero tolerance for that and I think I would have remembered that as we ban such users immediately.” ®