Photo-Illustration: Intelligencer; Photos: Getty Images

One thing you hear about a lot from the tiny group of extraordinarily wealthy and powerful people in charge of America’s AI companies is that, as the world sits on the cusp of potentially massive economic, social, and perhaps even spiritual transformation, it is time to figure this out together. “I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species,” wrote Anthropic’s Dario Amodei earlier this year, suggesting that one way to make it through will be to “encourage coordination” at the level of “industry and society.” AI will be “the most beneficial technology ever created,” Google’s Demis Hassabis has said, “but only if we apply it in the right way and build it in the right way.” (Just as you can tell you’re reading AI-generated text from all the bullet points, or an insistence on describing everything as not x, but y, a telltale sign that you’re hearing from an AI executive is a pleading, tic-like overuse of collective pronouns.) “We (the whole industry, not just OpenAI) are building a brain for the world,” OpenAI’s Sam Altman explained in a post about the coming “gentle singularity,” which is why it’s important that “we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want.”

A lot of what we’re hearing about us is really about them, of course, and is intended to signal — in the context of growing AI backlash, but also varying degrees of genuine personal angst and uncertainty — that they can be trusted to shepherd a technology that, if built and deployed the wrong way, they say could tear apart society, summon authoritarianism, or worse. It’s an awkward message. The public, according to numerous recent polls, finds it less appealing the more they hear it. You can blame AI’s image problem on a lot of things: Vague pressure to use it at work; suddenly abundant AI slop and spam; individually off-putting and polarizing founders; ideological objections to how it’s trained and deployed; foreboding, energy-hungry data centers that communities across the country are turning against. Mostly, of course, it’s fear about jobs.

But there’s one factor undermining the messaging from Altman, Amodei, Hassabis, and others that is both underrated and, perhaps, a blind spot for the industry: A lot of these guys absolutely and obviously despise one another.

The AI industry is defined by research, technological breakthroughs, and billions of dollars of eager capital, sure, but also by petty resentments, estrangements, and raging blood feuds, many of which have been building for years. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” wrote Sam Altman to Elon Musk in 2015, shortly after Google had acquired DeepMind. Given that it seemed like it would happen anyway, he wrote, “it seems like it would be good for someone other than Google to do it first.” Musk, who had told Altman that DeepMind was causing him “extreme mental stress” and that, should Google “win,” it would be “really bad news with their one mind to rule the world philosophy,” was receptive after recently failing to lure Hassabis to his constellation of companies instead. Soon, they became cofounders of OpenAI. By 2018, a bitter power struggle led to Musk cutting ties with OpenAI, leading to years of court battles, some still ongoing. Now, the men tweet openly about how much contempt they have for one another. (Altman on Musk: “I don’t think he’s, like, a happy person. I do feel for him.” Musk on Altman: “Scam Altman lies as easily as he breathes.”) Anthropic’s founding was the result of a core group of researchers and employees leaving OpenAI over concerns about its approach to safety, but also about Altman’s character specifically. (Amodei on Altman in 2021: “The problem with OpenAI is Sam himself.” In 2026, after OpenAI seized on Anthropic’s conflict with the Pentagon: Altman is telling “straight up lies” and “gaslighting.”) In 2023, Musk, now in possession of Twitter and a clearer public political identity, finally founded his own firm, xAI, to build a “maximum truth-seeking AI that tries to understand the nature of the universe,” but also because Sam Altman was making ChatGPT “woke,” which he said could be “deadly.” (Elaborating on the theme, and making sure not to miss anyone, Musk posted at Amodei earlier this year: “Your AI hates Whites & Asians, especially Chinese, heterosexuals and men. This is misanthropic and evil.”)

There are alliances. Sort of. Amodei and Hassabis present a unified front and seem to assess their positions in similar ways; Musk and Mark Zuckerberg, whose talent war with OpenAI briefly spilled into public nastiness, found common ground against Altman. But the spectacle of the AI race, for all its staggering scale and existential trappings, is increasingly shaped by the sort of lurid recriminations and transparently human antipathies that are hard to avoid in an incestuous industry in which most of the major firms were founded by people who didn’t trust the guy running the last one. Grievances and grudges appear to be trickling down and hardening into corporate strategies and house communications styles. In a recent memo, OpenAI’s chief revenue officer assured investors that it could still beat a surging Anthropic, but also went a bit further. That company, she said, is “built on fear, restriction, and the idea that a small group of elites should control AI.”

This, again, isn’t the main factor influencing public perception of AI, inspiring state-level data-center bans, or even driving extremists to attempt to firebomb executives. The economic vibes are broadly terrible, hiring is slow, and some of the first major layoffs directly attributed to AI by executives came in the tech industry, which was supposed to be the safe sector. (And that’s what the new data center across town is going to be for? No thanks!) Nothing AI leaders can say about each other is even a fraction as damaging as the frequent suggestion that what they are all clearly doing — building models that, outside the AI industry, and X, look far more similar than different — might interfere with your livelihood, or worse, no matter how careful, or conscientious, or anxious they claim to be.

That said, the AI industry resembling a multi-trillion-dollar broken-up polycule can’t be helping. One influential anonymous X account run by an OpenAI employee — speaking of interesting communications strategies! — worries that it might have some downsides:

the ai labs, in competing with each other, are burning huge amounts of the commons on public trust in ai to win minor points against the others. their lobbyists, pr machines, lawsuits. it’s the very opposite of what marxist class struggle analysis would tell you

— roon (@tszzl) April 14, 2026

From inside the industry, or for anyone who spends enough time steaming in the AI hothouses of X or LinkedIn, this map of intra-AI rivalries and vendettas is legible and, for some of these guys, ideologically coherent, rooted in old and substantive disagreements about how to build intelligent machines. From the outside, though, old, festering disagreements about alignment, AI safety, and novel corporate governance structures tend to lose a lot of texture, and the situation can be read, accurately if not necessarily sufficiently, as something simpler and more familiar: Another new industry in the midst of massive expansion, its investors desperate for upside, and its principal actors engaged in a ruthless land grab and fight for dominance that feels, to them, like a matter of life or death. That fight is all in pursuit of an outcome that they’ve explained is 1) probably inevitable and 2) might be pretty bad, and which therefore sounds awfully predatory.

It can be deflating to re-imagine the AI boom as a more pedestrian business story with particularly colorful executives expressing contempt for their rivals and making things personal on the way to, say, packaged beverage dominance. But the maximally dysfunctional dynamics of the pre-takeoff AI industry can also be read as an early, bad sign about how things might play out for everyone else, which is to say: like they always do, but maybe worse. Here is a visible, prepared, and substantively aligned “small group of elites,” including a few of the richest people in the entire world, suggesting that it’s time to collectively “rethink the social contract” and warning that we’re about to be “tested as a species,” as they’re in the process of succumbing completely to crude, winner-take-all market logic, utterly failing to coordinate amongst themselves, fighting regulation with lobbyists, getting pissed as hell in public, and opening up a bunch of fronts in a total industrial war for scarce resources — power, compute, water — with immediate and unmitigated externalities. (Granted, comprehensive high-level coordination might look like something else people don’t particularly love: a cabal.) Individually, to receptive audiences, they can explain how all this happened and rationalize their own roles. To much of the rest of the world, though, they just look like a group of people who worried about building the thing and then couldn’t figure out not to, who cautioned against getting trapped in an arms race and then started one anyway. They see people warning about the speed of change as they step over one another to make it accelerate. They see people urging humility and accusing one another of having God complexes while engaging in a naked struggle for power.

It’s easy and even tempting to underestimate how serious some of the leading voices in AI are about some of the wilder things they say. But when they claim that their rivals prevailing would be apocalyptic they are unmistakably, at least, sincere. They understand themselves, to different extents, to be articulating vastly different visions of the future that hinge on subtly distinct technical, legal, and semi-theological choices made today. Back outside, though, they present as another familiar and unwelcome spectacle: A group of powerful men proclaiming, one after the other, that he alone can fix it.
