Last month, OpenAI shut down Sora, its attempt at a social-media app, as part of a pivot away from a “do everything all at once” strategy that the company says left it on the “defensive” against companies like Anthropic. The story made sense. Claude Code was the new ChatGPT, and the race to build more capable coding and productivity models was the big prize. Then, this week, OpenAI made another announcement: It had acquired TBPN, the business-and-tech-centric video podcast. Fidji Simo, the former Meta executive who had been messaging the firm’s return-to-focus plan for weeks, tried to explain the rationale for a strange deal that nobody in the industry had seen coming:
As I’ve been thinking about the future of how we communicate at OpenAI, one thing that’s become clear is that the standard communications playbook just doesn’t apply to us. We’re not a typical company. We’re driving a really big technological shift. And with our mission to ensure artificial general intelligence benefits all of humanity comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.
I wouldn’t say that buying a friendly but independent outlet and immediately portraying it as an extension of your communications strategy is a sure thing, “conversation”-wise, but if you’ve watched TBPN’s fluent, chummy interviews with executives like Sam Altman, you can at least understand why they like it (after the purchase, Altman called it his “favorite tech show”). Still, the deal, valued in the “low hundreds of millions,” produced a lot of confusion. OpenAI was supposed to be avoiding “side-quests.” Why was it investing in podcasts?
One answer: Maybe the AI industry had seen the recent news about social media. Last week, Meta lost a pair of high-profile court cases centering on harms to young users, and plaintiffs, politicians, and commentators settled on a frame: It was social media’s “big tobacco” moment. A novel legal approach had finally panned out, potentially opening the floodgates for thousands more lawsuits and inviting new regulation. The juries, presented with the same technical arguments made in countless other courtrooms over the last decade, now seemed straightforwardly fed up with representatives of social-media companies that profess to be careful and thoughtful about dealing with young people while, in private, as jurors heard, routinely sharing messages like, “If we wanna win big with teens, we must bring them in as tweens.”
They also may have noticed how the public responded with a combination of relief and celebration. By the time these verdicts came down, school phone bans had swept across the country, and state-level attempts to ban minors from using social media were following closely behind. Public sentiment had already turned, in a broad and visceral way, against social media, and not just around kids. You could sense this in Meta’s narrow, helpless, and almost annoyed public-relations strategy in the week since the news. Andy Stone, its head of comms, reshared posts arguing that the California case was “a victory for the plaintiffs bar — not for children or society” and that it was a “blow against free speech.” He boosted multiple people taking issue with the whole “tobacco” thing: a Reason podcast lamenting social-media “prohibitionism” and stating the fact that tobacco is a “chemical” while social media is, actually, in fact, I think you’ll find, a “delivery system for speech,” and, yes, a TBPN post suggesting to readers that “it might be worth revisiting what exactly is addictive about cigarettes.”
“As people compare social media to the cigarette industry, it might be worth revisiting what exactly is addictive about cigarettes. Nicotine…
…it certainly seems like what pulls people into social media is more the humans that create content on the platform.”
–@johncoogan https://t.co/dG9o3LYYkF
— Andy Stone (@andymstone) March 30, 2026
Whatever the merits of their defenses — and, as Mike Masnick at TechDirt argues, there are more than some online-safety advocates are comfortable admitting — they have lately been, to be blunt, losing the public debate. The legal and rhetorical framework within which Meta and others long built their businesses — the laws protecting platforms from liability aren’t perfect, but without them, the internet as we know it couldn’t exist — suddenly tells us less about the possible futures of social media than a single unguarded quote from a Los Angeles juror, who told reporters after the verdict, “We wanted them to feel it. We wanted them to realize this was unacceptable.” No need to be too specific about what “this” is — everyone gets the idea.
Here, for people like Altman, is a glimpse of the future: Nobody wants to hear from social-media companies, while everyone wants something to be done to them. This punishing dynamic will consume their next decade in the form of rolling public-relations crises, lawsuits, regulations, and law, which they will have to deal with in the manner of other entrenched and unpopular industries, with lobbyists and lawyers, rather than as privileged stewards of the economy’s most exceptional story.
Today, AI leaders still command immense amounts of credulous attention, and their predictions — alongside leaks from their companies, cryptic posts from researchers, and viral X essays galore — are largely driving the story. But what about tomorrow? In the weeks before social media’s “tobacco moment,” there were already signs of angst surfacing in the AI industry about its loss of narrative control. Anthropic had publicly clashed with a right-wing Department of War that wanted control over its technology and for the company’s founder to sit down and shut up. Meanwhile, the most popular left-wing politicians in the country were suddenly advocating for an extreme AI “pause” that would halt development in its tracks, citing AI leaders’ own warnings about its risks. Something seemed to be shifting as the technology accelerated. Maybe it really is time to buy a podcast.
For social-media companies, the path from “connecting the world” to “we wanted them to feel it” was, at least as tech timelines go, pretty long — long enough that Mark Zuckerberg was blamed for the election of not one but two of the least popular presidents in modern history — and gave companies like Meta a lot of room to make a lot of money and do a lot of lobbying. For AI, however, the journey could be much, much shorter and quite a bit more brutal.
🚨 Anthropic CEO Dario Amodei: “We are so close to these models reaching the level of human intelligence, and yet there doesn’t seem to be a wider recognition in society of what’s about to happen … There hasn’t been a public awareness of the risks.” pic.twitter.com/9OuiTem3ce
— Chief Nerd (@TheChiefNerd) March 30, 2026
When, as above, Amodei expresses disappointment that “there doesn’t seem to be a wider recognition in society of what’s about to happen,” he’s referring to a diverse range of risks and worries about which he’s written and spoken extensively, in manifestos, at conferences, and during dinners with journalists and policymakers. But while he couldn’t be much more different a character or narrator from Meta’s frustrated head of comms, he’s posing a (more currently sympathetic, consequential, and earnest) version of the same question: We know something about how our technology works — why aren’t people listening to us?
For social media, the answer is intuitive: People take it for granted, feel ambivalent about their own usage, and associate the companies with scandal and their leaders with extreme wealth and politics. As a result, it has drifted into the same cursed space as “American health care,” “the media,” “education,” or even “the economy” as something that’s seen as broken, moving in the wrong direction, and where even users who report positive personal experiences assess the general situation as bad and getting worse. (Political scientists call this a gap between “egotropic” and “sociotropic” evaluations — an “I’m fine, but we’re not” divide — and it maps to some of America’s most truly fucked governance priorities.) There are signals that AI is more or less starting out there. People basically like ChatGPT but also think that AI is going to be really bad for the economy. They enjoy using AI tools to make their lives easier but find it disconcerting when, for example, their managers mandate usage. (This dynamic is exaggerated, of course, by social media.)
Where social-media companies spent years reacting to outcomes and consequences they could claim were unforeseen, AI leaders have practically been forecasting their own vilification, warning that increased adoption will produce destabilizing effects, that it will cause unemployment, that society isn’t ready for what’s coming, and that it will all happen faster than almost anyone expects.
Plenty of investors are listening to people like Amodei, of course, acting on his predictions and in some sense taking his advice — but only the parts that they believe might make them money. It’s when he starts talking about AI-powered bioterrorism, out-of-control mass surveillance, and white-collar job automation that the responses don’t seem, to him, to match the gravity of what he’s saying. Part of this is surely down to audiences: Investors move fast to seek advantage, while democratic governments take a long time to metabolize new technology and its downstream consequences. And part of it is surely because nobody in positions of power quite knows what you’re supposed to do, or where to even start, when someone says that they’re worried they’re about to eliminate, on the low end, a few tens of millions of jobs, perhaps on the way to summoning the apocalypse.
But Amodei’s disappointment isn’t just about what people aren’t doing. It’s about what they are. Last month, the company got into a fight with the Department of War over how its technology could be used in, well, war. Anthropic, claiming a unique understanding of both Claude’s limitations and its future risks and potential, wanted assurances about surveillance and autonomous killing. Pete Hegseth, alongside deputies like Emil Michael, more or less told them to fuck themselves, called them weird, and tried to designate the company a supply-chain risk.
DoD official Emil Michael on designating Anthropic a supply chain risk — “Their model has a soul, a ‘constitution’ — not the US Constitution. The other day their model was ‘anxious’ and they believe it has a 20% chance of being sentient and having its own ability to make… pic.twitter.com/D1aPSJYTaJ
— Aaron Rupar (@atrupar) March 12, 2026
Within a few weeks, AI companies saw some more assertive behavior on the political left in the form of proposed legislation from Bernie Sanders and Alexandria Ocasio-Cortez that would pause the construction of new data centers:
Bernie Sanders and AOC just proposed legislation to freeze AI data center construction until strong national safeguards are in place.
We’re all for preventing the rise of the machines, but hard not to wonder what this does to America’s shot at locking down AI dominance while… pic.twitter.com/2SRAw0nmeE
— Lark Davis (@LarkDavis) March 26, 2026
“We cannot sit back and allow a handful of billionaire Big Tech oligarchs to make decisions that will reshape our economy, our democracy and the future of humanity,” Sanders wrote of the bill. “We need serious public debate and democratic oversight over this enormously consequential issue. The time for action is now. We need a federal moratorium on AI data centers.”
You can align these episodes with existing schools of thought in the AI world if you want to. We can imagine that Hegseth’s advisers see AI companies as powerful and useful but not categorically different from other contractors they’ve worked with, and are far more worried about military and industrial supremacy than they are runaway capabilities; likewise, we can note that Sanders’s bill was influenced by, or at least rolled out alongside, his conversation with Eliezer Yudkowsky, an AI thinker with whom Amodei, for example, is deeply familiar, who recently published a book called If Anyone Builds It, Everyone Dies, and who has been calling to “shut it all down” for years.
But the thing these substantively and theoretically opposed approaches have in common is that they mostly don’t come from the AI industry, don’t match the terms of debate set out and favored by people like Altman and Amodei, and represent preemptive forms of the sort of broad, intense, and conceptually slippery backlash that took years to consume social media. These are not just evolved varieties of old AI risk and alignment conversations — they’re forceful responses to change articulated by people who, until recently, weren’t part of the AI conversation at all. Put another way: The window during which the world will look to people building AI for advice about how to respond to AI seems to be closing fast, even (and especially) as they make an urgent case for cooperation.
Google DeepMind, OpenAI, and Anthropic have been funding and sharing anticipatory research about the societal effects of AI — and are hiring more people for the job — since well before the general public started conceptualizing its use as a potential problem. In some cases they advocated for proactive regulation, but they also positioned themselves as partners to governments on plans for crossing over some sort of general, society-wide AI threshold. If recent events are any guide, though, the near future might not be about collectively figuring out with Anthropic how to best amend the “constitution” the company has written for Claude, or summoning Google’s “post-AGI” economists for friendly consultation with congressional committees. Instead, it could follow — driven by public opinion and, eventually, maybe, representative democracy — hyperpoliticized and reactive tracks that, to industry figures and policy experts steeped in theories of scaling and AI governance, will probably seem ill-informed, wrong-headed, unsophisticated, or vulgar.
I wrote that the polling on *AI* (as opposed to data centers) was pretty positive, but all the new polling that’s come out since my article has shown the public turning negative. https://t.co/2ovkamopvO pic.twitter.com/mqgFWOg2Uu
— Matthew Yglesias (@mattyglesias) March 30, 2026
You can read some of this angst in the TBPN news. And as someone building AI, or who has already been made extremely wealthy by an AI startup, you may be starting to get the feeling that you won’t get much credit for telling people ahead of time, in a concerned tone, that your technology might soon interrupt or ruin their lives, especially if it actually does. You’re still the guy whose plan was to do that, and it shouldn’t be a surprise if politicians, rather than seeking your counsel, decide to seek some distance from you. You might worry about getting Zuckified, or worse.
Politicians – especially Dems – should pledge not to take AI money.
They are buying up influence ahead of the midterms, and Dems who take AI $ will lose authority and trust as the public bears the cost.
Their money will end up being toxic anyway. People are catching on.
— Alexandria Ocasio-Cortez (@AOC) March 26, 2026
The American political reaction to AI is rapidly changing shape. The MAGA right is already reconsidering how its deregulatory support of the AI industry, which last year took the form of patriotic arms-race rhetoric and public events with CEOs (who, in turn, expressed support for and gave money to Trump), might look going forward. This will make the industry a target for politicians left and center, who not only are better positioned to oppose a hypervisible accumulation of capital against labor but, more relevantly to how American politics actually works, have a wide-open opportunity to associate the specter of AI transformation and job displacement with Republicans and to force it, at least temporarily, into a partisan frame. “But what if the effects of AI are more extreme in scope and scale than the apps that merely rerouted the world’s sociality through an ad network?” you ask. Too bad: Our (social-media-degraded!) political environment has limited patience for things like that. It’s no wonder Amodei already sounds a bit defeated. Altman still thinks he might be able to get in front of things:
new funding, new model, new policy push. @OpenAI will begin releasing a series of policy proposals next week meant to spark conversation about how to “rethink the social contract.” 2026 is gonna be an interesting year. https://t.co/wOzsY43oNW
— Julia Black (@mjnblack) April 1, 2026
OpenAI’s “Industrial policy for the Intelligence Age,” a set of proposals mingling plans to coordinate against AI security threats with outlines of redistributive government programs, is a sensible (if vague) document made incredibly strange by its source: Maybe we see evidence here of a different sort of AI bubble, one in which Sam Altman seems to believe that America is waiting on people like him to start a conversation about renegotiating the social contract, rather than a country spring-loaded for a tech backlash to end all tech backlashes — the same bubble where, last year, as few will recall, Zuckerberg, trying out a new identity as an AI CEO, wrote a manifesto about how “superintelligence is now in sight.”
But in OpenAI’s substantial and much quieter lobbying efforts around, among many other things, “child safety” and data-center deregulation — and perhaps in its nascent new media strategy — there’s evidence of a more paranoid and ruthless approach to managing the story. AI won’t just have a “big tobacco” moment, a point at which years of misleading marketing, personal angst, and visible harms gradually build to a moment of reckoning. It’s likely to have something more intense, and soon, arriving ahead of, not after, its diffusion through society and the economy. The last story the AI firms will have been in control of will be the one where they said they were about to change everything. It helped them raise a lot of money to build a lot of data centers and train better models. What happens next might not be up to them at all.