When Sam Altman co-founded OpenAI as a non-profit in 2015, with the altruistic ambition of ensuring AI “benefits all of humanity”, he might not have imagined that just over a decade later, his work would provoke someone to throw a Molotov cocktail at his home.

It was the first of two separate incidents at the tech entrepreneur’s San Francisco residence in recent days, with gunshots reported in the early hours of Sunday morning. While the motives are still being investigated, one of the suspects appears to have made his position clear in posts to Discord and Substack, as well as in a lengthy manifesto.

On the PauseAI Discord server, which advocates for non-violent reform of AI development, he referred to himself as a “Butlerian Jihadist”, a reference to the crusade in Frank Herbert’s Dune series to destroy “thinking machines”. He wrote about the “existential risk” AI poses – the exact same language Altman has frequently used to describe the technology his company is developing.

“Whenever a more advanced human civilisation has made contact with a less advanced one, the less advanced group is often met by conquest and genocide,” the suspect wrote on Substack. “So why the hell would we knowingly do this?”

He described it as a “war” for humanity, referencing the forces of “good and evil”. For him, the likes of Altman are traitors to their species.

The manifesto offers just one person’s perspective on the current state of AI development, but more telling has been the online reaction to the incidents.

The rapidly growing AntiAI community on Reddit, which now counts more than half a million members, was filled with posts expressing understanding for the attacks. The top post stated: “[Altman] should probably stop threatening to cause the apocalypse.”

On Instagram, some users went even further, lamenting the attacks’ lack of success – Altman and his family were unharmed – and encouraging others to carry out more. It has inevitably drawn comparisons to the shooting of the UnitedHealthcare CEO in 2024, which saw widespread support for the alleged perpetrator, Luigi Mangione, fuelled by deep-rooted anger at American health insurance companies.

Similar frustrations with AI firms are now building. Following various petitions from researchers to stem the development of powerful AI systems, several recent studies have revealed a notable shift in public perception of the technology and those in control of it.

OpenAI CEO Sam Altman, whose San Francisco home was targeted last week (Getty)

In Stanford University’s 2026 AI Index Report, published on Monday, more than half of the people surveyed said that products using AI made them feel nervous. A Gallup poll of Gen Z attitudes towards AI adoption, published earlier this month, found that excitement has dropped by roughly the same margin that anger has risen. There were similar findings in a report from Pew Research Center last month, which also confirmed that AI insiders are far more enthusiastic about AI than the general public.

The AI backlash appears to stem not only from fears about a theoretical superintelligence, but also from the everyday impact the technology is having on society. The Stanford report noted that public opinion on AI is increasingly disconnected from the views of experts, with people worried that it will hurt everything from the economy and elections to mental health and relationships.

“I think a lot of AI leaders are just out of touch with normal people and don’t realise that fears of Skynet [some kind of evil superintelligence] are not what is primarily driving anti-AI sentiment,” said US-based behavioural scientist Caroline Orr Bueno. “That exists, obviously, but most people are way more concerned with their paycheck and the cost of utilities.”

These frustrations have led to an escalating trend of direct action against the companies developing AI. Following the early petitions and open letters – whose signatories often included Altman and fellow OpenAI co-founders like Elon Musk – there came a growing number of protests.

People have taken to the streets in cities around the world, including London, Paris and New York, calling for stricter safety guardrails for AI and more protections for jobs and the climate. In San Francisco, mobs have smashed self-driving robotaxis and set them on fire, while others have carried out hunger strikes outside the offices of big AI firms.

“These AIs are being used to inflict serious harm on our society today and threaten to inflict greater damage tomorrow,” anti-AI campaigner Guido Reichstadter, who went on a month-long hunger strike outside Anthropic’s headquarters last September, wrote in a blog post. “We are in an emergency. Let us act as if this emergency is real.”

A Waymo driverless robotaxi was torched in San Francisco on 10 February, 2024 (YouTube/Frisco Live 415)

Aggressive acts of defiance remain rare, but they appear to have grown more frequent since late last year. AI labs have been vandalised, factories have been targeted by arsonists, and politicians have received death threats for supporting the construction of AI data centres.

But beyond the extreme incidents is a growing and legitimate movement. The PauseAI group has said it “unequivocally condemns” all forms of violence, intimidation and harassment, but warned that such acts should not be used to “paint the broader movement for AI safety as dangerous or extremist”.

Following the attacks on his home, Altman shared an uncommonly personal blog post. He blamed an “incendiary article” – a reference to a 16,000-word piece in the New Yorker last week that questioned whether he could be trusted at the helm of such a powerful company at such a pivotal moment. “I am awake in the middle of the night and p*ssed, and thinking that I have underestimated the power of words and narratives,” he wrote.

He also addressed the disillusionment increasingly evident among those outside the tech elite – yet he offered no real solutions. One solution, which appears impossible now that Altman is reportedly preparing his company for a $1 trillion public offering, would be to return to OpenAI’s founding principles.

OpenAI’s original mission statement can still be found buried within its website. “Since our research is free from financial obligations, we can better focus on a positive human impact,” it stated. A decade on, in his latest blog post, Altman wrote: “The world deserves huge amounts of AI and we must figure out how to make it happen.”

With backlash now escalating from petitions to petrol bombs, his words are unlikely to ease anxieties about the future of AI.