Anthropic's moral stand against Pentagon raises questions about AI's readiness for military use

Published March 4, 2026 — https://www.newsbeep.com/us/502504/

Anthropic's moral stand (https://apnews.com/article/anthropic-pentagon-ai-dario-amodei-hegseth-0c464a054359b9fdc80cf18b0d4f690c) on U.S. military use of artificial intelligence is reshaping the competition between leading AI companies, but it is also exposing a growing awareness that chatbots may simply not be capable enough for acts of war.

Anthropic's chatbot Claude, for the first time, outpaced rival ChatGPT in phone app downloads in the United States this week, a signal of growing interest from consumers siding with Anthropic in its standoff with the Pentagon, according to market research firm Sensor Tower.

The Trump administration on Friday ordered government agencies to stop using Claude (https://apnews.com/article/anthropic-pentagon-ai-hegseth-dario-amodei-b72d1894bc842d9acf026df3867bee8a) and designated it a supply chain risk after Anthropic CEO Dario Amodei refused to bend his company's ethical safeguards preventing the technology from being applied to autonomous weapons and domestic mass surveillance. Anthropic has said it will challenge the Pentagon in court once it receives formal notice of the penalties.

And while many military and human rights experts have applauded Amodei for standing up for ethical principles, some are also frustrated by years of AI industry marketing that persuaded the government to apply the technology to high-stakes tasks.

"He caused this mess," said Missy Cummings, a former Navy fighter pilot who now directs the robotics and automation center at George Mason University. "They were the No. 1 company to push ridiculous hype over the capabilities of these technologies. And now, all of a sudden, they want to be for real. They want to tell people, 'Oh, wait a minute. We really shouldn't be using these technologies in weapons.'"

Anthropic didn't immediately respond to a request for comment. The Defense Department declined to comment on whether it is still using Claude, including in the Iran war, citing operational security.

Cummings published a paper at a top AI conference in December arguing that government agencies should prohibit the use of generative AI "to control, direct, guide or govern any weapon." Not because AI is so smart that it could go rogue, but because the large language models behind chatbots like Claude make too many mistakes — called hallucinations or confabulations — and are "inherently unreliable and not appropriate in environments that could result in the loss of life."

"You're going to kill noncombatants," Cummings said in an interview Tuesday with The Associated Press. "You're going to kill your own troops. I'm not clear whether the military truly understands the limitations."

Amodei sought to emphasize those limitations in defending Anthropic's ethical stance last week, arguing that "frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk."

Anthropic, until recently, was the only one of its peers to have approval for use in classified military systems, where it has partnered with data analysis company Palantir and other defense contractors. President Donald Trump said Friday, around the same time he was approving Saturday's military strikes on Iran, that the Pentagon would have six months to phase out Anthropic's military applications.

Cummings, a former Palantir adviser, said it's possible that Claude has already been used in military strike planning.

"I just fundamentally hope that there were humans in the loop," she said. "A human has to babysit these technologies very closely. You can use them to do these things, but you need to verify, verify, verify."

She said that's a contrast to the messaging from AI companies that have suggested their technology is evolving to the point where it is "almost sentient."

"If there's culpability here, I'd say half is Anthropic's for driving the hype and half is the Department of War's fault for firing all the people that would have otherwise advised them against stupid uses of technology," Cummings said.

One social media commentator this week described Anthropic's government problems as a "Hype Tax" — a message that was reposted by President Donald Trump's top AI adviser, David Sacks, a frequent critic of the company.

And while Anthropic's stand has caused legal hassles that could jeopardize its business partnerships with other military contractors, it has also bolstered the company's reputation as a safety-minded AI developer.

"It's applaudable that a company stood up to the government in order to maintain what it felt were its ethics and were its business choices, even in the face of these potentially crippling policy responses," said Jennifer Huddleston, a senior fellow at the libertarian-leaning Cato Institute.

Consumers have already spoken, driving a surge of Claude downloads that made it the most popular iPhone app starting on Saturday and the most popular across all phone systems in the U.S. on Monday, according to Sensor Tower. That's come at the expense of OpenAI's ChatGPT, which saw its consumer reputation damaged when it announced a Friday deal with the Pentagon to effectively replace Anthropic with ChatGPT in classified environments.

In the Apple store, the number of 1-star reviews — the worst rating — of ChatGPT grew by 775% on Saturday and continued to grow early this week, reflecting a backlash that forced OpenAI to do damage control.

"We shouldn't have rushed to get this out on Friday," OpenAI CEO Sam Altman said in a social media post Monday. "The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

Altman gathered employees for an "all-hands" meeting on Tuesday to discuss next steps.

"There are many things the technology just isn't ready for, and many areas where we don't yet understand the tradeoffs required for safety," Altman said on X. "We will work through these, slowly, with the (Pentagon), with technical safeguards and other methods."