These developments are also intriguing in the context of the company’s streamlining moves over the last few weeks. Last Tuesday, OpenAI announced the shuttering of its video-generation model, Sora, and the dissolution of its billion-dollar licensing deal with Disney—much to the entertainment company’s surprise. OpenAI also axed its controversial plans to release an erotic companion.

Meanwhile, the company has been reorganizing its safety-and-security efforts, and it announced that its OpenAI Foundation plans to spend $1 billion over the next year on medical research, AI resilience, and community programs. Even its product group was renamed to AGI Deployment.

These moves all seem to point to a company on the verge of…something. An IPO, which is scheduled for later this year? Falling irrevocably behind its competitors at Anthropic and Google? An actual technological breakthrough?

There’s also the rapidly approaching 2026 midterms—arguably the first election cycle in which AI and its ramifications will be truly top of mind for American voters. Perhaps the company has woken up to the fact that AI’s dismal popularity ratings are bound to catch up with it in the form of harsh regulation.

OpenAI’s new policy proposals will target “societal issues as tech advances toward superintelligence.”

In general, phrases like “AI safety” and “AI risk” have become dirty words since Donald Trump took office a second time and “acceleration” became the Silicon Valley rallying cry. But the tides could be shifting back again, with some of the euphoria around Trump and his deregulatory paradigm starting to fade. In February, OpenAI poached safety researcher Dylan Scandinaro from Anthropic to lead its preparedness team, which now appears to be staffing up with roles focused on frontier biological and chemical risks, cybersecurity risks, and the ominously named “loss of control.”

Interestingly, OpenAI’s leadership has not exactly been walking in lockstep when it comes to politics. Achiam was last seen tweeting about how the “effort by the pro-AI lobby to torpedo Alex Bores will later on be widely understood as a pointless own-goal.” Many perceived that to be a slight against OpenAI president Greg Brockman, who has poured millions of dollars into a super PAC dedicated to attacking pro-regulation candidates like Bores.