Analysis by Allison Morrow, CNN

Photo: hands on a laptop. Thomas Lefebvre / Unsplash

“The world is in peril,” warned the former head of Anthropic’s Safeguards Research team as he headed for the exit. A researcher for OpenAI, similarly on the way out, said that the technology has “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

They’re part of a wave of artificial intelligence researchers and executives who aren’t just leaving their employers – they’re loudly ringing the alarm bell on the way out, calling attention to what they see as bright red flags.

While Silicon Valley is known for high turnover, the latest churn comes as market leaders like OpenAI and Anthropic race toward IPOs that could turbocharge their growth while also inviting intense scrutiny of their operations.

In just the past few days, a number of high-profile AI staffers have decided to call it quits, with some explicitly warning that the companies they worked for are moving too fast and downplaying the technology’s shortcomings.

Zoë Hitzig, a researcher with OpenAI for the past two years, broadcast her resignation Wednesday in a New York Times essay, citing “deep reservations” about OpenAI’s emerging advertising strategy. Hitzig, who warned about ChatGPT’s potential for manipulating users, said that the chatbot’s archive of user data, built on “medical fears, their relationship problems, their beliefs about God and the afterlife,” presents an ethical dilemma precisely because people believed they were chatting with a program that had no ulterior motives.

Hitzig’s critique comes as the tech news site Platformer reports that OpenAI disbanded its “mission alignment” team, created in 2024 to promote the company’s goal of ensuring that all of humanity benefits from the pursuit of “artificial general intelligence” – a hypothetical AI capable of human-level thought.

OpenAI didn’t immediately respond to a request for comment.

Also this week, Mrinank Sharma, the head of Anthropic’s Safeguards Research team, posted a cryptic letter Tuesday announcing his decision to leave the company and warning that “the world is in peril.”

Sharma’s letter made only vague references to Anthropic, the company behind the Claude chatbot. He didn’t say why he was leaving but noted it was “clear to me that the time to move on has come” and that “throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.”

Anthropic told CNN in a statement that it was grateful for Sharma’s work advancing AI safety research. The company noted that he was not the head of safety nor was he in charge of broader safeguards at the company.

Meanwhile, at xAI, two co-founders quit in the span of 24 hours this week, announcing their departures on X. That leaves just half of xAI’s founders remaining at the firm, which is merging with Elon Musk’s SpaceX to create the world’s most valuable private company. At least five other xAI staff have announced their departures on social media over the past week.

It wasn’t immediately clear why the latest xAI co-founders left, and xAI didn’t respond to a request for comment. In a social media post Wednesday, Musk said xAI was “reorganised” to speed up growth, which “unfortunately required parting ways with some people.”

While it’s not unusual for high-level talent to bounce around in an emerging industry like AI, the scale of the departures over such a short period at xAI stands out.

The startup has faced a global backlash over its Grok chatbot, which was allowed to generate nonconsensual pornographic images of women and children for weeks before the team stepped in to stop it. Grok has also been prone to generating antisemitic comments in responses to user prompts.

Other recent departures underscore the tension between some researchers worried about safety and top executives eager to generate revenue.

On Tuesday, The Wall Street Journal reported that OpenAI fired one of its top safety executives, Ryan Beiermeister, after she voiced opposition to the rollout of an “adult mode” that allows pornographic content on ChatGPT. The company said it fired her on the grounds that she discriminated against a male employee – an accusation Beiermeister told the Journal was “absolutely false.”

OpenAI told the Journal that her firing was unrelated to “any issue she raised while working at the company.”

High-level defections have been part of the AI story since ChatGPT came on the market in late 2022. Not long after, Geoffrey Hinton, known as the “Godfather of AI,” left his role at Google and began evangelizing about what he sees as existential risks AI poses, including massive economic upheaval in a world where many will “not be able to know what is true anymore.”

Doomsday predictions abound – including among AI executives who have a financial incentive to hype up the power of their own products. One of those predictions went viral this week, with HyperWrite CEO Matt Shumer posting a nearly 5,000-word screed about how the latest AI models have already made some tech jobs obsolete.

“We’re telling you what already occurred in our own jobs,” he wrote, “and warning you that you’re next.”

-CNN