{"id":387475,"date":"2026-04-19T16:35:17","date_gmt":"2026-04-19T16:35:17","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/387475\/"},"modified":"2026-04-19T16:35:17","modified_gmt":"2026-04-19T16:35:17","slug":"why-all-the-ai-leaders-hate-one-another","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/387475\/","title":{"rendered":"Why All the AI Leaders Hate One Another"},"content":{"rendered":"<p>                  <img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2026\/04\/e006b470c518ccc51995880e2c7acfd9a7-AI-guys-4-moshed-04-16-17-16-14-304.rhorizontal.w1100.jpg\" class=\"lede-image\" data-content-img=\"\" width=\"1100\" height=\"733\" style=\"width:100%;height:auto;\" fetchpriority=\"high\"\/> <\/p>\n<p>\n                  Photo-Illustration: Intelligencer; Photos: Getty Images\n              <\/p>\n<p class=\"clay-paragraph_drop-cap\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1wxjtj000j0if76bjdtmmm@published\" data-word-count=\"224\">One thing you hear about a lot from the tiny group of extraordinarily wealthy and powerful people in charge of America\u2019s AI companies is that, as the world sits on the cusp of potentially massive economic, social, and perhaps even spiritual transformation, it is time to figure this out together. 
\u201cI believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species,\u201d <a href=\"https:\/\/nymag.com\/intelligencer\/article\/dario-amodeis-warnings-about-ai-are-about-politics-too.html\" rel=\"nofollow noopener\" target=\"_blank\">wrote<\/a> Anthropic\u2019s Dario Amodei earlier this year, suggesting that one way to make it through will be to \u201cencourage coordination\u201d at the level of \u201cindustry and society.\u201d AI will be \u201cthe most beneficial technology ever created,\u201d Google\u2019s Demis Hassabis has <a href=\"https:\/\/time.com\/collections\/time100-ai-2024\/7012767\/demis-hassabis\/\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a>, \u201cbut only if we apply it in the right way and build it in the right way.\u201d (Just as you can tell you\u2019re reading AI-generated text from all the bullet points, or an insistence on describing everything as not x, but y, a telltale sign that you\u2019re hearing from an AI executive is a pleading, tic-like overuse of collective pronouns.) 
\u201cWe (the whole industry, not just OpenAI) are building a brain for the world,\u201d OpenAI\u2019s Sam Altman <a href=\"https:\/\/blog.samaltman.com\/the-gentle-singularity\" rel=\"nofollow noopener\" target=\"_blank\">explained<\/a> in a post about the coming \u201cgentle singularity,\u201d which is why it\u2019s important that \u201cwe can robustly guarantee that we get AI systems to learn and act towards what we collectively really want.\u201d<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlbg001z3b7b4a7bscqs@published\" data-word-count=\"138\">A lot of what we\u2019re hearing about us is really about them, of course, and is meant to signal \u2014\u00a0in the context of <a href=\"https:\/\/www.newyorker.com\/culture\/infinite-scroll\/ai-has-a-message-problem-of-its-own-making\" rel=\"nofollow noopener\" target=\"_blank\">growing AI backlash<\/a>, but also varying degrees of genuine personal angst and uncertainty \u2014\u00a0that they can be trusted to shepherd a technology that, if built and deployed the wrong way, they say could tear apart society, summon authoritarianism, or worse. It\u2019s an awkward message. The public, according to <a href=\"https:\/\/poll.qu.edu\/poll-release?releaseid=3955\" rel=\"nofollow noopener\" target=\"_blank\">numerous<\/a> <a href=\"https:\/\/www.nbcnews.com\/politics\/politics-news\/poll-majority-voters-say-risks-ai-outweigh-benefits-rcna262196\" rel=\"nofollow noopener\" target=\"_blank\">recent<\/a> polls, finds it less appealing the more they hear it. 
You can blame AI\u2019s image problem on a lot of things: Vague <a href=\"https:\/\/nymag.com\/intelligencer\/article\/the-ai-warnings-shopify-fiverr-memo.html\" rel=\"nofollow noopener\" target=\"_blank\">pressure<\/a> to use it at work; suddenly abundant AI slop and spam; individually <a href=\"https:\/\/www.newyorker.com\/magazine\/2026\/04\/13\/sam-altman-may-control-our-future-can-he-be-trusted\" rel=\"nofollow noopener\" target=\"_blank\">off-putting<\/a> and <a href=\"https:\/\/nymag.com\/intelligencer\/article\/elon-musks-grokipedia-is-a-warning.html\" rel=\"nofollow noopener\" target=\"_blank\">polarizing<\/a> founders; ideological objections to how it\u2019s trained and deployed; foreboding, energy-hungry data centers that communities are <a href=\"https:\/\/www.washingtonpost.com\/business\/2026\/04\/15\/data-centers-poll-virginia\/\" rel=\"nofollow noopener\" target=\"_blank\">turning against<\/a> across the country. Mostly, of course, it\u2019s fear about jobs.<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzldi00213b7bqrs3m1bc@published\" data-word-count=\"36\">But there\u2019s one factor undermining the messaging from Altman, Amodei, Hassabis, and others that is both underrated and, perhaps, a blind spot for the industry: A lot of these guys absolutely and obviously despise one another.<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlfq00223b7bfpeij77i@published\" data-word-count=\"348\">The AI industry is defined by research, technological breakthroughs, and billions of dollars of eager capital, sure, but also by petty resentments, estrangements, and raging blood-feuds, many of which have been building for years. 
\u201cBeen thinking a lot about whether it\u2019s possible to stop humanity from developing AI,\u201d <a href=\"https:\/\/www.techemails.com\/p\/elon-musk-and-openai\" rel=\"nofollow noopener\" target=\"_blank\">wrote<\/a> Sam Altman to Elon Musk in 2015, shortly after Google had acquired DeepMind. Given that it seemed like it would happen anyway, he wrote, \u201cit seems like it would be good for someone other than Google to do it first.\u201d Musk, who had <a href=\"https:\/\/x.com\/TechEmails\/status\/1893744624620970351\" rel=\"nofollow\">told<\/a> Altman that DeepMind was causing him \u201cextreme mental stress\u201d and that, should Google \u201cwin,\u201d it would be \u201creally bad news with their one mind to rule the world philosophy,\u201d was receptive after recently <a href=\"https:\/\/finance.yahoo.com\/sectors\/technology\/articles\/why-demis-hassabis-ignored-elon-122441020.html\" rel=\"nofollow noopener\" target=\"_blank\">failing<\/a> to lure Hassabis to his constellation of companies instead. Soon, they became cofounders of OpenAI. By 2018, a <a href=\"https:\/\/www.businessinsider.com\/history-of-elon-musk-and-sam-altman-relationship-feuds-2023-3\" rel=\"nofollow noopener\" target=\"_blank\">bitter power struggle<\/a> led to Musk cutting ties with OpenAI, leading to years of court battles, some still ongoing. Now, the men tweet openly about how much contempt they have for one another. (Altman on Musk: \u201cI don\u2019t think he\u2019s, like, a happy person. I do feel for him.\u201d Musk on Altman: \u201cScam Altman lies as easily as he breathes.\u201d) Anthropic\u2019s founding was the result of a core group of researchers and employees leaving OpenAI over concerns about its approach to safety, but also about Altman\u2019s character specifically. 
(Amodei on Altman <a href=\"https:\/\/www.newyorker.com\/magazine\/2026\/04\/13\/sam-altman-may-control-our-future-can-he-be-trusted\" rel=\"nofollow noopener\" target=\"_blank\">in 2021<\/a>: \u201cThe problem with OpenAI is Sam himself.\u201d In 2026, after OpenAI seized on Anthropic\u2019s conflict with the Pentagon: Altman is telling \u201cstraight up lies\u201d and \u201cgaslighting.\u201d) In 2023, Musk, now in possession of Twitter and a clearer public political identity, finally founded his own firm, xAI, to build a \u201cmaximum truth-seeking AI that tries to understand the nature of the universe,\u201d but also because Sam Altman was making ChatGPT \u201cwoke,\u201d which he said could be \u201cdeadly.\u201d (Elaborating on the theme, and making sure not to miss anyone, Musk posted at Amodei earlier this year: \u201cYour AI hates Whites &amp; Asians, especially Chinese, heterosexuals and men. This is misanthropic and evil.\u201d)<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlhj00233b7btakhgt2c@published\" data-word-count=\"159\">There are alliances. Sort of. Amodei and Hassabis present a unified front and seem to assess their positions in similar ways; Musk and Mark Zuckerberg, whose <a href=\"https:\/\/nymag.com\/intelligencer\/article\/how-meta-became-uniquely-toxic-for-top-ai-talent.html\" rel=\"nofollow noopener\" target=\"_blank\">talent<\/a> war with OpenAI briefly spilled into public nastiness, found <a href=\"https:\/\/fortune.com\/2026\/03\/31\/elon-musk-mark-zuckerberg-doge-openai-takeover-court-documents\/\" rel=\"nofollow noopener\" target=\"_blank\">common ground<\/a> against Altman. 
But the spectacle of the AI race, for all its staggering scale and existential trappings, is increasingly shaped by the sort of lurid recriminations and transparently human antipathies that are hard to avoid in an incestuous industry in which most of the major firms were founded by people who didn\u2019t trust the guy running the last one. Grievances and grudges appear to be trickling down and hardening into corporate strategies and house communications styles. In a recent memo, OpenAI\u2019s chief revenue officer assured investors that it could still beat a surging Anthropic, but also went a bit further. That company, she <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/911118\/openai-memo-cro-ai-competition-anthropic\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a>, is \u201cbuilt on fear, restriction, and the idea that a small group of elites should control AI.\u201d<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlna00243b7bez191kkw@published\" data-word-count=\"134\">This, again, isn\u2019t the main factor influencing public perception of AI, inspiring <a href=\"https:\/\/www.washingtonpost.com\/nation\/2026\/04\/14\/maine-bans-data-centers\/\" rel=\"nofollow noopener\" target=\"_blank\">state-level data-center bans<\/a>, or even driving extremists to attempt to firebomb executives. The economic vibes are broadly terrible, hiring is slow, and some of the first major layoffs directly attributed to AI by executives came in the tech industry, which was supposed to be the safe sector. (And that\u2019s what the new data center across town is going to be for? No thanks!) 
Nothing AI leaders can say about each other is even a fraction as damaging as the frequent suggestion that what they are all clearly doing \u2014\u00a0building models that, outside the AI industry, and X, look far more similar than different \u2014\u00a0might interfere with your livelihood, or worse, no matter how careful, or conscientious, or anxious they claim to be.<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlpc00253b7b3e95fx1r@published\" data-word-count=\"37\">That said, the AI industry resembling a multi-trillion-dollar broken-up polycule can\u2019t be helping. One influential anonymous X account run by an OpenAI employee \u2014\u00a0 speaking of interesting communications strategies! \u2014 worries that it might have some downsides:<\/p>\n<p lang=\"en\" dir=\"ltr\">the ai labs, in competing with each other, are burning huge amounts of the commons on public trust in ai to win minor points against the others. their lobbyists, pr machines, lawsuits. it\u2019s the very opposite of what marxist class struggle analysis would tell you<\/p>\n<p>\u2014 roon (@tszzl) <a href=\"https:\/\/twitter.com\/tszzl\/status\/2044190488408785282?ref_src=twsrc%5Etfw\" rel=\"nofollow noopener\" target=\"_blank\">April 14, 2026<\/a><\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlr900263b7bzy7pg0rj@published\" data-word-count=\"155\">From inside the industry, or even if you spend enough time steaming in the AI hothouses of X or LinkedIn, this map of intra-AI rivalries and vendettas is legible and, for some of these guys, ideologically coherent, rooted in old and substantive disagreements about how to build intelligent machines. 
From the outside, though, old, festering disagreements about alignment, AI safety, and novel corporate governance structures tend to lose a lot of texture, and the situation can be read, accurately if not necessarily sufficiently, as something simpler and more familiar: Another new industry in the midst of massive expansion, its investors desperate for upside, and its principal actors engaged in a ruthless land-grab and fight for dominance that feels, to them, like a matter of life or death. That fight is all in pursuit of an outcome that they\u2019ve explained is 1) probably inevitable and 2) might be pretty bad, and which therefore sounds awfully predatory.<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlt100273b7bp4g9etzl@published\" data-word-count=\"286\">It can be deflating to re-imagine the AI boom as a more pedestrian business story with particularly colorful executives expressing contempt for their rivals and making things personal on the way to, say, packaged beverage dominance. But the maximally dysfunctional dynamics of the pre-takeoff AI industry can also be read as an early, bad sign about how things might play out for everyone else, which is to say: like they always do, but maybe worse. 
Here is a visible, prepared, and substantively aligned \u201csmall group of elites,\u201d including a few of the richest people in the entire world, suggesting that it\u2019s time to collectively \u201c<a href=\"https:\/\/www.vanityfair.com\/news\/story\/openai-new-model-superintelligence-policy-push?srsltid=AfmBOoq0U5UVGGhrygeEEUIxwr9BDuwy1tInDlpj7J12dN2xq8Mq5yZW\" rel=\"nofollow noopener\" target=\"_blank\">rethink the social contract<\/a>\u201d and warning that we\u2019re about to be \u201ctested as a species,\u201d as they\u2019re in the process of succumbing completely to crude, winner-take-all market logic, utterly failing to coordinate amongst themselves, fighting regulation with lobbyists, getting pissed as hell in public, and opening up a bunch of fronts in a total industrial war for scarce resources \u2014\u00a0power, compute, water \u2014\u00a0with immediate and unmitigated externalities. (Granted, comprehensive high-level coordination might look like something else people don\u2019t particularly love: a cabal.) Individually, to receptive audiences, they can explain how all this happened and rationalize their own roles. To much of the rest of the world, though, they just look like a group of people who worried about building the thing and then couldn\u2019t figure out not to, who cautioned against getting trapped in an arms race and then started one anyway. They see people warning about the speed of change as they step over one another to make it accelerate. They see people urging humility and accusing one another of having God complexes while engaging in a naked struggle for power.<\/p>\n<p class=\"clay-paragraph\" data-editable=\"text\" data-uri=\"nymag.com\/intelligencer\/_components\/clay-paragraph\/instances\/cmo1xzlv500283b7b9fyfkybt@published\" data-word-count=\"96\">It\u2019s easy and even tempting to underestimate how serious some of the leading voices in AI are about some of the wilder things they say. 
But when they claim that their rivals\u2019 prevailing would be apocalyptic, they are unmistakably, at least, sincere. They understand themselves, to different extents, to be articulating vastly different visions of the future that hinge on subtly distinct technical, legal, and semi-theological choices made today. Back outside, though, they present as another familiar and unwelcome spectacle: A group of powerful men proclaiming, one after the other, that he alone can fix it.<\/p>\n","protected":false},"excerpt":{"rendered":"Photo-Illustration: Intelligencer; Photos: Getty Images One thing you hear about a lot from the tiny group of 
extraordinarily&hellip;\n","protected":false},"author":2,"featured_media":387476,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,363,364,23119,111,139,69,7542,145],"class_list":{"0":"post-387475","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-john-herrman","12":"tag-new-zealand","13":"tag-newzealand","14":"tag-nz","15":"tag-screen-time","16":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/387475","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=387475"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/387475\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/387476"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=387475"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=387475"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=387475"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}