{"id":524901,"date":"2026-03-09T16:48:07","date_gmt":"2026-03-09T16:48:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/524901\/"},"modified":"2026-03-09T16:48:07","modified_gmt":"2026-03-09T16:48:07","slug":"how-ai-firm-anthropic-wound-up-in-the-pentagons-crosshairs-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/524901\/","title":{"rendered":"How AI firm Anthropic wound up in the Pentagon\u2019s crosshairs | AI (artificial intelligence)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman\u2019s OpenAI or Elon Musk\u2019s xAI. Its CEO and co-founder Dario Amodei was an industry fixture but hardly a household name outside of Silicon Valley, and its chatbot Claude lagged in popularity behind ChatGPT.<\/p>\n<p class=\"dcr-130mj7b\">That perception has shifted as Anthropic has become the central actor in a high-profile fight with the Department of Defense over the company\u2019s refusal to allow Claude to be used for domestic mass surveillance and autonomous weapons systems that can kill people without human input. Amid tense negotiations, the AI firm rejected a Pentagon deadline for a deal last week, in a move that led Pete Hegseth, the defense secretary, to <a href=\"https:\/\/x.com\/SecWar\/status\/2027507717469049070?s=20\" data-link-name=\"in body link\" rel=\"nofollow\">accuse<\/a> Anthropic of \u201carrogance and betrayal\u201d of its home country while demanding that any companies that work with the US government cease all business with the AI firm.<\/p>\n<p class=\"dcr-130mj7b\">The week since has brought more chaos. OpenAI announced it had struck its own deal with the DoD, resulting in employee pushback and Amodei accusing rival CEO Sam Altman of giving \u201cdictator-style praise\u201d to Donald Trump, for which Amodei later apologized. Trump meanwhile denounced Anthropic in <a href=\"https:\/\/www.politico.com\/news\/2026\/03\/05\/bitterly-ironic-trump-is-wrecking-his-ai-agenda-with-anthropic-spat-lobbyists-and-ex-officials-say-00814448\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">an interview with Politico<\/a>, saying he \u201cfired them like dogs\u201d. On Thursday, the DoD <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/mar\/05\/trump-anthropic-ai-pentagon\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">formally declared Anthropic<\/a> a supply-chain risk and demanded other businesses cut ties \u2013 the first time an American company has ever been targeted with the designation \u2013 which poses grave financial consequences for the company if fully enacted.<\/p>\n<p class=\"dcr-130mj7b\">The feud has intensified an unsettled debate over how AI will be used in warfare and who will be accountable for the result, while also representing one of the most dramatic disagreements so far between the tech industry and the Trump administration. As the military rapidly adopts the technology for its operations, including in the war with Iran, it has turned previously hypothetical situations into real-world ethical tests for AI companies.<\/p>\n<p class=\"dcr-130mj7b\">Anthropic\u2019s standoff with the DoD is also the culmination of what researchers see as some of the AI firm\u2019s inherent contradictions. 
It is a company founded on the premise of creating a safe future for AI that has nevertheless struck major partnerships for classified work with the Pentagon and the [surveillance tech giant](https://www.theguardian.com/us-news/ng-interactive/2025/sep/22/ice-palantir-data) Palantir. Its leadership says it is deeply worried about the existential risks of AI, yet it recently [dropped a founding safety pledge](https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/), citing the speed of industry competition. It has pledged transparency, but like other AI companies it has developed its models through a rapacious demand for proprietary data, with [court records](https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/) documenting how it led a secretive effort to scan and destroy millions of physical books to train Claude.

Yet recent weeks have shown that there are some red lines Anthropic appears unwilling to cross – a rarity within a tech industry that has largely [made itself subservient](https://www.theguardian.com/technology/2026/jan/20/trump-tech-alliance-datacenters-social-media) to the Trump administration and to a fear of falling behind industry rivals. The fallout from its resistance to the Pentagon's demands has so far been a public relations victory for Anthropic, with [Claude surging in popularity](https://www.theguardian.com/technology/2026/mar/02/claude-anthropic-ai-pentagon) after the deal fell apart, while OpenAI has been left bandaging its reputation.

Anthropic did not respond to a request for comment on a set of questions related to this article.

The longer-term implications for Anthropic are less clear, with some defense contractors as well as the US state and treasury departments [already stepping away](https://www.reuters.com/sustainability/society-equity/defense-contractors-like-lockheed-seen-removing-anthropics-ai-after-trump-ban-2026-03-04/) from using its AI models and the Trump administration intent on punishing Anthropic for its dissent. Anthropic has said it will challenge its supply-chain risk designation in court, while Amodei has also [reportedly reopened negotiations](https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b148b) with the DoD in recent days to try to reach a resolution.

The 'safety-first' AI company

Before he was sparring with Sam Altman and the Pentagon, Dario Amodei was one of OpenAI's leading researchers. Amodei joined Altman's firm in 2016 after a stint at Google, taking on a prominent role in developing OpenAI's GPT models and eventually becoming vice-president of research.
His younger sister Daniela, meanwhile, served as vice-president of safety and policy, helping oversee the ethical development of OpenAI's models.

As OpenAI rapidly advanced its technology and Altman divisively consolidated his authority over the company, the Amodeis broke away in 2021, prior to the release of ChatGPT, to found Anthropic – taking several other OpenAI employees with them. They branded Anthropic as an "AI safety and research company", and central to their new firm was a vow to build safer AI systems that would follow [detailed sets of principles](https://www.anthropic.com/constitution) they describe as a constitution.

Dario Amodei, Anthropic co-founder and CEO. Photograph: Chris Ratcliffe/Bloomberg via Getty Images

In 2024, Amodei published a lengthy essay titled "Machines of Loving Grace" that outlined his utopian vision for the future of AI. He argued that AI could eliminate most cancers, prevent nearly all forms of infectious disease and reduce economic inequality. He also presented vague ideas for how AI would integrate into everything from decision-making in the justice system to how the government provides services such as health benefits. On democracy, however, Amodei was more skeptical.

"I see no strong reason to believe AI will preferentially or structurally advance democracy and peace," he wrote.

Amodei, who received a doctorate in biophysics at Princeton University before becoming enthralled with the potential of artificial intelligence, had for years been concerned about the existential risks of developing AI, seeing parallels to the creation of nuclear weapons. One of [his favorite books](https://www.nytimes.com/2023/07/21/podcasts/dario-amodei-ceo-of-anthropic-on-the-paradoxes-of-ai-safety-and-netflixs-deep-fake-love.html?showTranscript=1) is The Making of the Atomic Bomb by Richard Rhodes, a nearly 900-page Pulitzer-winning account of how nuclear scientists ushered in a new and dangerous world through the technology they created.

While a mix of discomfort and pride about becoming the new Robert Oppenheimer is common among the CEOs of AI companies, part of the Amodeis' focus on existential risk has ties to a utilitarian movement known as "effective altruism", which became popular in Silicon Valley throughout the 2010s and advocated for projects that would maximize global good. The movement, which has since fallen out of vogue after a series of scandals such as its close association with the disgraced crypto billionaire Sam Bankman-Fried, also featured a subset of people concerned with AI safety – the idea that one of the biggest global threats is the development of AI that could turn against humanity.

Although [the Amodeis have denied being adherents](https://www.wired.com/story/anthropic-benevolent-artificial-intelligence/) of effective altruism, many of the company's core principles echo its language, such as vows to "maximize positive outcomes for humanity in the long run".
Some of Anthropic's earliest investors, such as Facebook co-founder Dustin Moskovitz, also had connections to the effective altruism movement. Daniela Amodei's husband, Holden Karnofsky, meanwhile co-founded and for years served as CEO of Open Philanthropy, one of the largest effective-altruism-based philanthropic funding organizations. When Hegseth declared Anthropic a supply-chain risk this past week, he also criticized the company as being "cloaked in the sanctimonious rhetoric of 'effective altruism'".

The AI safety movement has its critics outside the Pentagon as well, including researchers who believe that concerns about existential threats from artificial intelligence are often a distraction from the more tangible, mundane harms and biases of AI.

"They would talk about these existential risks and the misappropriation of AI for bioterrorism. I always thought that those were either too distant or too out of reach," said Sarah Kreps, director of the Tech Policy Institute at Cornell University. "That it didn't quite fully understand risk."

The difference between the concerns of the capital-S "AI Safety" movement and the broader field of safety and ethics in AI is a long-running schism within the industry. It also offers an explanation for some of the dissonance over how Anthropic could be so worried about developing AI to benefit humanity while at the same time allowing its models to be used by intelligence and defense agencies for lethal purposes.

"There seems to be a little bit of a misunderstanding in the discourse – that because Anthropic have clearly put themselves out as accountable, then they are against the use of their systems in warfare," said Margaret Mitchell, an AI ethics researcher and chief ethics scientist at the tech company Hugging Face. "But that's not true."

"It's not that they don't want to kill people. It's that they want to make sure to kill the right people," Mitchell said. "And who the right people are is decided by the government."

From safety-first to targeted missile strikes

While Anthropic vowed to build safer AI, it pursued a different sector of the AI market than its rivals. Where OpenAI's ChatGPT is presented as a consumer-forward chatbot that many people treat like a search engine or an AI companion, Anthropic has geared Claude more toward enterprise software solutions and integration into the organizational infrastructure of workplaces. The distinction, though boring on its face, has made Claude the preferred choice at many organizations and helped make it the first model permitted for classified use in military systems.

Anthropic's integration into the military began with a 2024 deal that allowed Claude to be used within Palantir's systems, which already operated in classified environments. The two companies touted the agreement as a way to drastically reduce the resources and time needed for military operations and intelligence gathering.
The following year, Anthropic, along with several other major AI companies, struck a $200m deal with the DoD for the use of its AI tools in military operations.

What has since become apparent is that these deals did not include permanent agreements on how the government could use Anthropic's AI or on what safety guardrails would be fixed to its models. Because the military accessed Claude indirectly through Palantir's systems, Anthropic had less direct control over how its technology was used than it has on Claude's own website. That discrepancy came to a head in recent months when the government asked Anthropic to loosen its safety restrictions to allow a wider range of uses, kicking off the current dispute between the company and the Pentagon.

Anthropic's hiring in recent years of former Biden staffers, Amodei's political opposition to Trump and Hegseth's desire to eradicate "wokeness" from the military have all added a political dimension to the standoff. The Pentagon's chief technical officer, Emil Michael, also appears to [hold a personal distaste](https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b148b) for Amodei, having publicly accused him of being a "liar" and having a "God complex".

Giving a sense of urgency to the negotiations is the US military's use of Claude for a wide range of operations, including in its mission to capture the Venezuelan leader Nicolás Maduro and in its war with Iran. [The Washington Post reported](https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/) that the military is using Palantir's Maven smart system, which has Claude embedded in it, to determine which sites in Iran to bomb and to provide analysis of its strikes.

While the dispute Anthropic has run into with the Pentagon has elements unique to AI, it is also emblematic of problems around dual-use technologies, experts say – products that have both civilian and military applications. A technology developed for a broad consumer base and then adapted for use in classified military systems is bound to hit fault lines, since it is not tailor-made for specific use cases or built with parameters specifically for military use. Companies can find their product being repurposed in ways they may ethically oppose but have little ability to prevent.

"The same technology that underlies finding a bird in a picture underlies finding a civilian fleeing from their home," Mitchell said, offering an example. "That's the same type of model, just very slightly different fine tuning."

Another issue is that tech companies do not have a perfect window into how their technologies will be used in classified systems, while at the same time the military does not know exactly how proprietary technologies like Anthropic's Claude actually work – an issue the law professor Ashley Deeks has called the "double black box".
Even contracts on agreed-upon uses can be fuzzy, especially given the Trump administration's distaste for legal oversight.

"There is an expectation, generally, that parties to a contract are supposed to comply with the contract," said Deeks, a professor at the University of Virginia Law School. "But, of course, contracts need to be interpreted, and the military might interpret a phrase one way where the company intended it to mean something else."

Hanging over the feud is also the broader question of who should decide what AI is used for, along with a lack of detailed regulation from Congress on autonomous weapons systems. Although neither Anthropic nor the Pentagon believes that a private company should have decision-making power over AI's military applications, right now the company is functioning as one of the only checks on what appear to be the military's expansive desires for weaponizing AI.

"Do we want the DoD to be using AI for autonomous weapon systems, and if so, in what settings, with what restrictions, at what level of confidence, what level of risk are we willing to take on?" Deeks said. "It's hard for us to have a sense out in the public about how the DoD is thinking about all this."