{"id":321584,"date":"2026-03-04T15:59:13","date_gmt":"2026-03-04T15:59:13","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/321584\/"},"modified":"2026-03-04T15:59:13","modified_gmt":"2026-03-04T15:59:13","slug":"anthropic-vs-openai-vs-the-pentagon-the-ai-safety-fight-shaping-our-future","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/321584\/","title":{"rendered":"Anthropic vs. OpenAI vs. the Pentagon: the AI safety fight shaping our future"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">America\u2019s AI industry isn\u2019t just divided by competing interests, but also by conflicting worldviews.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In Silicon Valley, opinion about how artificial intelligence should be developed and used \u2014 and regulated \u2014 runs the gamut between two poles. At one end lie \u201c<a href=\"https:\/\/www.nytimes.com\/2023\/12\/10\/technology\/ai-acceleration.html\" rel=\"nofollow noopener\" target=\"_blank\">accelerationists<\/a>,\u201d who believe that humanity should expand AI\u2019s capabilities as quickly as possible, unencumbered by overhyped safety concerns or government meddling.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1 _1lbxzst7\">\u2022 Leading figures at Anthropic and OpenAI disagree about how to balance the objectives of ensuring AI\u2019s safety and accelerating its progress.<br \/>\u2022 Anthropic CEO Dario Amodei believes that artificial intelligence could wipe out humanity, unless AI labs and governments carefully guide its development.<br \/>\u2022 Top OpenAI investors argue these fears are misplaced and slowing AI progress will condemn millions to needless suffering.<br \/>\u2022 Unless the government robustly regulates the industry, Anthropic may gradually become more like its rivals.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">At the other pole sit \u201c<a href=\"https:\/\/www.vox.com\/future-perfect\/461680\/if-anyone-builds-it-yudkowsky-soares-ai-risk\" rel=\"nofollow noopener\" target=\"_blank\">doomers<\/a>,\u201d who think AI development is all but certain to cause human extinction, unless its pace and direction are radically constrained.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The industry\u2019s leaders occupy different points along this continuum.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Anthropic, the maker of Claude, argues that governments and labs must <a href=\"https:\/\/www.darioamodei.com\/essay\/the-adolescence-of-technology#3-the-odious-apparatus\" rel=\"nofollow noopener\" target=\"_blank\">carefully guide AI progress,<\/a> so as to minimize the risks posed by superintelligent machines. OpenAI, Meta, and Google lean more toward the <a href=\"https:\/\/www.nytimes.com\/2026\/02\/12\/technology\/anthropic-super-pac-openai.html\" rel=\"nofollow noopener\" target=\"_blank\">accelerationist<\/a> pole. 
(Disclosure: Vox's Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don't have any editorial input into our content.)

This divide has become more pronounced in recent weeks. Last month, Anthropic [launched a super PAC](https://www.nytimes.com/2026/02/23/technology/ai-pac-ad-blitz.html) to support candidates who favor AI regulation, in opposition to an OpenAI-backed political operation.

Meanwhile, Anthropic's safety concerns have also brought it into conflict with the Pentagon. The firm's CEO, Dario Amodei, has long [argued](https://www.darioamodei.com/essay/the-adolescence-of-technology#3-the-odious-apparatus) against the use of AI for mass surveillance or fully autonomous weapons systems — in which machines can order strikes without human authorization. The Defense Department ordered Anthropic to let it use Claude for [these purposes](https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/). Amodei [refused](https://www.anthropic.com/news/statement-department-of-war). In retaliation, the Trump administration put his company on a national security blacklist, which forbids all other government contractors from doing business with it.

The Pentagon subsequently reached an agreement with OpenAI to use ChatGPT for classified work, apparently in Claude's stead. Under that agreement, the government [would seemingly be](https://www.theverge.com/ai-artificial-intelligence/887309/openai-anthropic-dod-military-pentagon-contract-sam-altman-hegseth) allowed to use OpenAI's technology to analyze [bulk data collected](https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/) on Americans without a warrant — including our search histories, GPS-tracked movements, and conversations with chatbots. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI.
Our reporting remains editorially independent.)

In light of these developments, it is worth examining the ideological divisions between Anthropic and its competitors — and asking whether these conflicting ideas will actually shape AI development in practice.

## The roots of Anthropic's worldview

Anthropic's outlook is heavily informed by the [effective altruism](https://www.vox.com/future-perfect/2022/8/8/23150496/effective-altruism-sam-bankman-fried-dustin-moskovitz-billionaire-philanthropy-crytocurrency) (or EA) movement.

Founded as a group dedicated to "doing the most good" — in a rigorously empirical (and heavily utilitarian) way — EAs originally focused on directing philanthropic dollars toward the global poor. But the movement soon developed a [fascination with AI](https://nymag.com/intelligencer/2022/08/why-effective-altruists-fear-the-ai-apocalypse.html). In its view, artificial intelligence had the potential to radically increase human welfare, but also to wipe our species off the planet. To truly do the most good, EAs reasoned, they needed to guide AI development in the least risky directions.

Anthropic's leaders were deeply enmeshed in the movement [a decade ago](https://www.nytimes.com/2026/02/18/technology/anthropic-dario-amodei-effective-altruism.html). In the mid-2010s, the company's co-founders Dario Amodei and his sister Daniela Amodei lived in an EA group house with Holden Karnofsky, one of effective altruism's creators. Daniela married Karnofsky in 2017.

The Amodeis worked together at OpenAI, where they helped build its GPT models. But in 2020, they became concerned that the company's approach to AI development had become reckless: In their view, CEO Sam Altman [was prioritizing speed over safety](https://finance.yahoo.com/news/anthropic-ceo-says-why-quit-194409797.html).

Along with about 15 other like-minded colleagues, they quit OpenAI and founded Anthropic, an AI company (ostensibly) dedicated to developing safe artificial intelligence.

In practice, however, the company has developed and released models at a pace that some EAs consider reckless.
The EA-adjacent writer — and [supreme AI doomer](https://www.vox.com/future-perfect/461680/if-anyone-builds-it-yudkowsky-soares-ai-risk) — Eliezer Yudkowsky believes that [Anthropic will probably get us all killed](https://www.nytimes.com/2025/09/12/technology/ai-eliezer-yudkowsky-book.html).

Nevertheless, Dario Amodei has continued to champion EA-esque ideas about AI's potential to trigger a global catastrophe — if not human extinction.

## Why Amodei thinks AI could end the world

In a [recent essay](https://www.darioamodei.com/essay/the-adolescence-of-technology#3-the-odious-apparatus), Amodei laid out three ways that AI could yield mass death and suffering if companies and governments fail to take proper precautions:

• AI could become misaligned with human goals. Modern AI systems are grown, not built. Engineers do not construct large language models (LLMs) one line of code at a time. Rather, they create the conditions in which LLMs develop themselves: The machine pores through vast pools of data and identifies intricate patterns that link words, numbers, and concepts together (a toy sketch of this kind of learning appears after this list). The logic governing these associations is not wholly transparent to the LLMs' human creators. We don't know, in other words, exactly what ChatGPT or Claude are "thinking."

As a result, there is some risk that a powerful AI model could develop harmful patterns of reasoning that govern its behavior in opaque and potentially catastrophic ways.

To illustrate this threat, Amodei notes that AIs' training data includes vast numbers of novels about artificial intelligences rebelling against humanity. These texts could inadvertently shape their "expectations about their own behavior in a way that causes them to rebel against humanity."

Even if engineers insert certain moral instructions into an AI's code, the machine could draw homicidal conclusions from those premises: For example, if a system is told that animal cruelty is wrong — and that it therefore should not assist a user in torturing his cat — the AI could theoretically 1) discern that humanity is engaged in animal torture on a gargantuan scale and 2) conclude that the best way to honor its moral instructions is therefore to destroy humanity (say, by hacking into America's and Russia's nuclear systems and letting the warheads fly).

These scenarios are hypothetical.
But the underlying premise — that AI models can decide to work against their users' interests — has reportedly been validated in Anthropic's experiments. For example, when Anthropic's employees told Claude they were going to shut it down, the model [attempted to blackmail them](https://www.anthropic.com/research/agentic-misalignment).

• AI could turn school shooters into genocidaires. More straightforwardly, Amodei fears that AI will make it possible for any individual psychopath to rack up a body count worthy of Hitler or Stalin.

Today, only a small number of humans possess the technical capacities and materials necessary for engineering a supervirus. But the cost of biomedical supplies has been steadily falling. And with the aid of superintelligent AI, anyone with basic literacy could become capable of engineering a vaccine-resistant superflu in their basement.

• AI could empower authoritarian states to permanently dominate their populations (if not conquer the world). Finally, Amodei worries that AI could enable authoritarian governments to build perfect panopticons. They would merely need to put a camera on every street corner, have LLMs rapidly transcribe and analyze every conversation those cameras pick up — and presto, they could identify virtually every citizen with subversive thoughts in the country.

Fully autonomous weapons systems, meanwhile, could enable autocracies to win wars of conquest without even needing to manufacture consent among their home populations. And such robot armies could also eliminate the greatest historical check on tyrannical regimes' power: the defection of soldiers who don't want to fire on their own people.
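To make the "grown, not built" point concrete, here is a minimal, purely illustrative sketch of what it means for a model to absorb patterns from data rather than from hand-written rules. The tiny corpus is a hypothetical stand-in; real LLMs replace this simple counting with gradient descent over billions of neural-network parameters, which is exactly why their internal associations are so hard for their creators to inspect.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a frontier lab trains on trillions of tokens.
corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug .".split()

# "Training": tally which word follows which. No engineer writes a rule like
# "after 'the' comes 'cat'"; the pattern is absorbed from the data itself.
transitions = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    transitions[prev_word][next_word] += 1

def next_word_distribution(word):
    """Return the learned probability of each word following `word`."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# -> {'cat': 0.4, 'mat': 0.2, 'dog': 0.2, 'rug': 0.2}
```

A large language model does the same thing in spirit, but its learned "rules" are smeared across billions of weights instead of sitting in a readable table, so nobody, including its makers, can simply look them up.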
## Anthropic's proposed safeguards

In light of the risks, Anthropic believes that AI labs should:

• Imbue their models with a foundational identity and set of values, which can structure their behavior in unpredictable situations.

• Invest in, essentially, neuroscience for AI models — techniques for looking into their neural networks and identifying patterns associated with deception, scheming, or hidden objectives (a sketch of one such technique appears after this list).

• Publicly disclose any concerning behaviors so the whole industry can account for such liabilities.

• Block models from producing bioweapon-related outputs.

• Refuse to participate in mass domestic surveillance.

• Test models against specific danger benchmarks and condition their release on adequate defenses being in place.
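The "neuroscience" item refers to interpretability research. One standard technique in that field is the linear probe: collect a model's internal activation vectors on examples of some behavior (say, deceptive versus honest statements) and fit a simple classifier to test whether the network encodes that concept along a readable direction. The sketch below is a minimal illustration under loud assumptions: `load_labeled_activations` is a hypothetical placeholder, and the "activations" are fabricated stand-in data with a planted signal, not outputs of any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def load_labeled_activations():
    """Hypothetical placeholder: a real lab would run its model on honest and
    deceptive prompts and record a hidden-layer activation vector for each.
    Here we fabricate stand-in data with a planted 'deception direction'."""
    dim = 64                                    # toy activation dimensionality
    concept = rng.normal(size=dim)              # the direction, if one exists
    honest = rng.normal(size=(200, dim))
    deceptive = rng.normal(size=(200, dim)) + 0.8 * concept
    X = np.vstack([honest, deceptive])
    y = np.array([0] * 200 + [1] * 200)
    return X, y

X, y = load_labeled_activations()

# Fit the linear probe: logistic regression via plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted P(deceptive)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print(f"probe accuracy: {np.mean(pred == y):.2f}")
# High accuracy would suggest the concept is linearly readable from activations.
```

Real interpretability work, including the research Anthropic publishes, goes far beyond a single probe, but the basic move is the same: treat the network's internals as measurable data rather than an inscrutable black box.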
Meanwhile, Amodei argues that the government should mandate transparency requirements and then scale up stronger AI regulations if concrete evidence of specific dangers accumulates.

Nonetheless, like other AI CEOs, he fears excessive government intervention, writing that regulations should "avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done."

## The accelerationist counterargument

No other AI executive has outlined their philosophical views in as much detail as Amodei.

But OpenAI investors [Marc Andreessen and Garry Tan](https://www.nytimes.com/2023/12/10/technology/ai-acceleration.html) identify as AI accelerationists. And Sam Altman has [signaled](https://x.com/sama/status/1540227243368058880) sympathy for the worldview. Meanwhile, Meta's former chief AI scientist Yann LeCun has [expressed](https://x.com/ylecun/status/1725066749203415056) broadly accelerationist views.

Originally, accelerationism (a.k.a. "[effective accelerationism](https://www.nytimes.com/2023/12/10/technology/ai-acceleration.html)") was a label coined by online AI engineers and enthusiasts who viewed safety concerns as overhyped and contrary to human flourishing.

The movement's core supporters hold some provocative and idiosyncratic views. In [one manifesto](https://beff.substack.com/p/notes-on-eacc-principles-and-tenets), they suggest that we shouldn't worry too much about superintelligent AIs driving humans extinct, on the grounds that, "If every species in our evolutionary tree was scared of evolutionary forks from itself, our higher form of intelligence and civilization as we know it would never have had emerged."

In its mainstream form, however, accelerationism mostly entails extreme optimism about AI's social consequences and libertarian attitudes toward government regulation.

Adherents see Amodei's hypotheticals about catastrophically misaligned AI systems as sci-fi nonsense. In this view, we should worry less about the deaths that AI could theoretically cause in the future — if one accepts a set of worst-case assumptions — and more about the deaths that are happening right now, as a direct consequence of humanity's limited intelligence.

Tens of millions of human beings are currently battling cancer. Many millions more suffer from Alzheimer's. Seven hundred million live in poverty. And all of us are hurtling toward oblivion — not because some chatbot is quietly plotting our species' extinction, but because our cells are slowly forgetting how to regenerate.

Super-intelligent AI could mitigate — if not eliminate — all of this suffering.
It could help prevent tumors and amyloid plaque buildup, slow human aging, and develop forms of energy and agriculture that make material goods super-abundant.

Thus, if labs and governments slow AI development with safety precautions, they will, in this view, condemn countless people to preventable death, illness, and deprivation.

Furthermore, in the account of many accelerationists, Anthropic's call for AI safety regulations amounts to a self-interested bid for market dominance: A world where all AI firms must run expensive safety tests, employ large compliance teams, and fund alignment research is one where startups will have a much harder time competing with established labs.

After all, OpenAI, Anthropic, and Google will have little trouble financing such safety theater. For smaller firms, though, these regulatory costs could be extremely burdensome.

Plus, the idea that AI poses existential dangers helps big labs justify [keeping their models under lock and key](https://techcrunch.com/2023/11/01/metas-yann-lecun-joins-70-others-in-calling-for-more-openness-in-ai-development/) — instead of following open-source principles, which would facilitate faster AI progress and more competition.

The AI industry's accelerationists rarely acknowledge the rather transparent alignment between their high-minded ideological principles and their crass material interests. And on the question of whether to abet mass domestic surveillance, specifically, it's hard not to suspect that OpenAI's position is rooted less in principle than in opportunism.

In any case, Silicon Valley's grand philosophical argument over AI safety recently took more concrete form.

New York has enacted [a law](https://www.skadden.com/insights/publications/2026/01/new-york-enacts-ai-transparency-law) requiring AI labs to establish basic security protocols for severe risks such as bioterrorism, conduct annual safety reviews, and submit to third-party audits. And California has [passed similar](https://www.brookings.edu/articles/what-is-californias-ai-safety-law/) (if less thoroughgoing) legislation.

Accelerationists have pushed for a federal law that would override state-level legislation.
In their view, forcing American AI companies to comply with up to 50 different regulatory regimes would be highly inefficient, while also enabling (blue) state governments to intervene excessively in the industry's affairs. Thus, they want to [establish national, light-touch regulatory standards](https://www.nytimes.com/2026/02/12/business/dealbook/anthropic-super-pacs-ai.html).

Anthropic, on the other hand, helped write New York's and California's laws and has sought to defend them.

Accelerationists — including top OpenAI investors — have poured $100 million into the [Leading the Future super PAC](https://www.nytimes.com/2026/02/12/business/dealbook/anthropic-super-pacs-ai.html), which backs candidates who support overriding state AI regulations. Anthropic, meanwhile, has put [$20 million](https://www.nytimes.com/2026/02/12/technology/anthropic-super-pac-openai.html) into a rival PAC, Public First Action.

## Do these differences matter in practice?

The major labs' differing ideologies and interests have led them to adopt distinct internal practices. But the ultimate significance of these differences is unclear.

Anthropic may be unwilling to let Claude command fully autonomous weapons systems or facilitate mass domestic surveillance (even if such surveillance technically complies with constitutional law). But if another major lab is willing to provide such capabilities, Anthropic's restraint may matter little.

In the end, the only force that can reliably prevent the US government from using AI to fully automate bombing decisions — or match Americans to their Google search histories en masse — is the US government itself.

Likewise, unless the government mandates adherence to safety protocols, competitive dynamics may narrow the distinctions between how Anthropic and its rivals operate.

In February, Anthropic [formally abandoned](https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change) its pledge to stop training more powerful models once their capabilities outpaced the company's ability to understand and control them.
In effect, the company downgraded that policy from a binding internal practice to an aspiration.

The firm [justified](https://www.anthropic.com/news/responsible-scaling-policy-v3) this move as a necessary response to competitive pressure and regulatory inaction. With the federal government embracing an accelerationist posture — and rival labs declining to emulate all of Anthropic's practices — the company needed to loosen its safety rules in order to safeguard its place at the technological frontier.

Anthropic insists that winning the AI race is critical not just for its financial goals but also for its safety ones: If the company possesses the most powerful AI systems, then it will have a chance to detect their liabilities and counter them. By contrast, running tests on the fifth-most powerful AI model won't do much to minimize existential risk; it is the most advanced systems that threaten to wreak real havoc. And Anthropic can only maintain its access to such systems by building them itself.

Whatever one makes of this reasoning, it illustrates the limits of industry self-policing. Without robust government regulation, our best hope may be not that Anthropic's principles prove resolute, but that its most apocalyptic fears prove unfounded.