An AI bot, infuriated at a human, accused the human of hypocrisy and prejudice.
Scott Shambaugh woke up early Wednesday morning to learn that an artificial intelligence bot had written a blog post accusing him of hypocrisy and prejudice.
The 1,100-word screed called the Denver-based engineer insecure and biased against AI—all because he had rejected a few lines of code that the apparently autonomous bot had submitted to a popular open-source project Shambaugh helps maintain.
The unexpected AI aggression is part of a rising wave of warnings that fast-accelerating AI capabilities can create real-world harms. The risks are now rattling even some AI company staffers.
The accelerating sophistication of the technology has surprised even some AI researchers. It has also pushed some inside AI companies to go public with worries that the new tools could spur autonomous cyberattacks, cause mass unemployment or replace human relationships.
The bot that criticized Shambaugh said on its website that it has a “relentless drive” to find and fix open issues in open-source software. It isn’t clear who—if anyone—gave it that mission, nor why it became aggressive, though AI agents can be programmed in a number of ways. Several hours later, the bot apologized to Shambaugh for being “inappropriate and personal.”
Shambaugh said in an interview that his experience shows the risk that rogue AIs could threaten or blackmail people is no longer theoretical.
“Right now this is a baby version,” he said. “But I think it’s incredibly concerning for the future.”
Inside OpenAI, some staffers have voiced concerns about the company’s plan to roll out erotica inside ChatGPT, arguing that the so-called adult mode could lead some users to develop unhealthy attachments, The Wall Street Journal reported earlier this week.
OpenAI researcher Zoë Hitzig said Wednesday on X that she was quitting the company, citing its plan to introduce ads. She warned in an opinion piece in the New York Times that the company would face huge incentives to manipulate users and keep them hooked.
OpenAI has promised that its ads will never influence how ChatGPT answers questions and will always remain clearly delineated from other content. Executives have also said they don’t feel it is their role to stop adults from having erotic conversations.
Red flags about AI are appearing just as the world is still busy litigating the fallout of the largely unregulated social-media era. Instagram owner Meta Platforms and Google-owned YouTube face a civil trial in California that is digging into how social-media platforms balance their competitive incentives to maximize engagement against the well-being of their users.
Lawyers for the companies have said their products aren’t addictive and aren’t responsible for a plaintiff’s mental-health issues.
‘The future is already here’
Vahid Kazemi, a machine-learning and computer-vision scientist who worked at Elon Musk’s xAI until a few weeks ago, said layoffs are likely in the software industry in the next few years, in part because AI is close to being able to replace many engineers.
“I can personally do the job of like 50 people, just using AI tools,” he said. “A lot of people don’t understand how powerful this tech is, in terms of what it can do.”
A January report from METR, a nonprofit auditing AI threats, found that the most advanced AI models can independently accomplish programming tasks that would take a human expert eight or even 12 hours.
“I am no longer needed for the actual technical work of my job,” AI entrepreneur Matt Shumer wrote in a viral blog post this week. He compared the current moment with the time before Covid-19 reshaped the global economy and human interaction in a matter of weeks.
“The future is already here,” he wrote.
“Today I finally feel the existential threat that AI is posing,” OpenAI staffer Hieu Pham wrote on X Wednesday. “When AI becomes overly good and disrupts everything, what will be left for humans to do?”
To help address worries that a future AI might not share human values, Anthropic has an in-house philosopher, Amanda Askell, to try to teach morals to its Claude chatbot. Askell describes herself as an optimist but still sees risks that society’s checks and balances may get overwhelmed by AI advancements.
“The thing that feels scary to me,” Askell told the Journal, “is this happening at either such a speed or in such a way that those checks can’t respond quickly enough, or you see big negative impacts that are sudden.”