{"id":537872,"date":"2026-03-13T13:44:07","date_gmt":"2026-03-13T13:44:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/537872\/"},"modified":"2026-03-13T13:44:07","modified_gmt":"2026-03-13T13:44:07","slug":"anthropic-pentagon-battle-shows-how-big-tech-has-reversed-course-on-ai-and-war-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/537872\/","title":{"rendered":"Anthropic-Pentagon battle shows how big tech has reversed course on AI and war | AI (artificial intelligence)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war \u2013 and what lines it will not cross. Amid Silicon Valley\u2019s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech\u2019s answer is looking very different than it did even less than a decade ago.<\/p>\n<p class=\"dcr-130mj7b\">Anthropic\u2019s feud with the <a href=\"https:\/\/www.theguardian.com\/us-news\/trump-administration\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Trump administration<\/a> escalated three days ago as the AI firm sued the Department of Defense, claiming that the government\u2019s decision to blacklist it from government work violated its first amendment rights. 
The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons.<\/p>\n<p class=\"dcr-130mj7b\">Anthropic has argued that giving in to the DoD\u2019s demands to permit \u201cany lawful use\u201d of its technology would violate its founding safety principles and open up its technology for potential abuse, staking an ethical boundary that others in the industry must decide whether they want to cross.<\/p>\n<p class=\"dcr-130mj7b\">Although Anthropic\u2019s refusal to remove safety guardrails and the Pentagon\u2019s subsequent retaliation have highlighted longstanding concerns over the use of AI for conflict, the fight has shown how much the goalposts have moved when it comes to big tech\u2019s ties to the military.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIf people are looking for good guys and bad guys, where a good guy is someone who doesn\u2019t support war,\u201d said Margaret Mitchell, an AI researcher and chief ethics scientist at the tech firm Hugging Face, \u201cthen they\u2019re not going to find that here.\u201d<\/p>\n<p>Anti-military protests to military contracts<\/p>\n<p class=\"dcr-130mj7b\">There are a number of contributing factors in big tech\u2019s newfound embrace of militarism. Its alignment with the Trump administration, which has included <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/20\/trump-tech-alliance-datacenters-social-media\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">shows of fealty to Trump<\/a> from major CEOs, has tied tech firms to the government\u2019s desire to expand its military capabilities. 
The administration\u2019s vow to overhaul federal agencies using artificial intelligence has also specifically signaled an opportunity for AI firms to integrate their products into government and military operations in a way that could secure revenue for years to come. Looming in the background, concern over China\u2019s technological advancement and a surge in international defense spending have also shifted attitudes in the industry.<\/p>\n<p class=\"dcr-130mj7b\">It was not so long ago, however, that working with the military on potentially harmful technology was seen as a red line for many big tech workers. In 2018, thousands of Google employees launched a protest against a program to analyze drone footage for the DoD called Project Maven.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe believe that Google should not be in the business of war,\u201d over 3,000 workers stated in an open letter at the time. Google decided not to renew Project Maven following the protests and <a href=\"https:\/\/www.armscontrol.org\/act\/2018-07\/news\/google-renounces-ai-work-weapons\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">published policies<\/a> that barred pursuing technology that could \u201ccause or directly facilitate injury to people\u201d.<\/p>\n<p class=\"dcr-130mj7b\">In the years since the Project Maven protest, though, Google has <a href=\"https:\/\/www.theguardian.com\/technology\/2024\/apr\/27\/google-project-nimbus-israel\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">clamped down on employee activism<\/a>, <a href=\"https:\/\/www.wired.com\/story\/google-responsible-ai-principles\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">removed the 2018 language from its policies<\/a> that prohibited creating technology for weaponry and signed numerous contracts that allow militaries to use its products. 
In 2024, the tech giant fired over 50 employees in response to protests against the company\u2019s <a href=\"https:\/\/www.theguardian.com\/technology\/2024\/apr\/27\/google-project-nimbus-israel\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">military ties to the Israeli government<\/a>. Chief executive Sundar Pichai <a href=\"https:\/\/blog.google\/company-news\/inside-google\/company-announcements\/building-ai-future-april-2024\/#:~:text=One%20final%20note:%20All%20of,collaborate,%20discuss%20and%20even%20disagree.\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">sent a memo<\/a> to employees after the firings stating that Google was a business and not a place to \u201cfight over disruptive issues or debate politics\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Google announced just this week that it would <a href=\"https:\/\/cloud.google.com\/blog\/topics\/public-sector\/gemini-for-government-build-custom-ai-agents-for-unclassified-work-on-genaimil\/?utm_campaign=%5BREBRAND%5D+%5BTI-AM%5D+Th&amp;utm_content=1095&amp;utm_medium=email&amp;utm_source=cio&amp;utm_term=124\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">provide its Gemini artificial intelligence<\/a> to the military as a platform for creating AI agents to work on unclassified projects.<\/p>\n<p class=\"dcr-130mj7b\">OpenAI, too, had a blanket ban on allowing any militaries to access its models prior to 2024, but it has since reversed course and now has its chief product officer <a href=\"https:\/\/defensescoop.com\/2025\/06\/13\/army-detachment-201-executive-innovation-corps-meta-openai-palantir\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">serving as a lieutenant colonel<\/a> in the US military\u2019s \u201cexecutive innovation corps\u201d. The startup, along with Google, Anthropic and xAI, signed an up-to-$200m contract with the DoD last year to integrate its technology into military systems. 
On the day that Pete Hegseth, the defense secretary, declared Anthropic a supply chain risk, OpenAI <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/mar\/03\/openai-pentagon-ceo-sam-altman-chatgpt\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">secured a deal with the DoD<\/a> allowing its tech to be used in classified military systems.<\/p>\n<p class=\"dcr-130mj7b\">Elsewhere in the tech industry, more hawkish companies like defense tech firm Anduril, founded the year before the Google Maven protests, and surveillance tech maker Palantir have made partnering with the DoD a cornerstone of their businesses and attempted to sway Silicon Valley politics towards their worldview. Palantir has been ahead of the curve on working with the military, contracting with military intelligence to map planted explosives in Afghanistan in the early 2010s. Chief executive Alex Karp published a book last year dedicated in large part to advocating for closer integration of the tech industry and AI with the <a href=\"https:\/\/www.theguardian.com\/us-news\/us-military\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">US military<\/a>, in one passage accusing the Google workers who protested in 2018 of being nihilists.<\/p>\n<p class=\"dcr-130mj7b\">After Google dropped the Project Maven contract in 2019, Palantir took it over. 
Maven is now the name of the classified system that military personnel use to access Anthropic\u2019s Claude, according to the <a href=\"https:\/\/www.washingtonpost.com\/technology\/2026\/03\/04\/anthropic-ai-iran-campaign\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Washington Post<\/a>.<\/p>\n<p>Anthropic goes to war<\/p>\n<p class=\"dcr-130mj7b\">Even as Anthropic has received public praise in its standoff with the Pentagon, its co-founder and chief executive Dario Amodei has emphasized that the AI company and the government largely want the same things.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAnthropic has much more in common with the Department of War than we have differences,\u201d Amodei wrote in a blogpost last Thursday.<\/p>\n<p class=\"dcr-130mj7b\">While the White House has accused Anthropic of being \u201ca radical left, woke company\u201d, Amodei\u2019s views on the use of AI in conflict and fears of its misuse are far from tree-hugging pacifism. In a lengthy essay published in January, Amodei warned against potential harms of AI such as the creation of deadly bioweapons and threats from China maliciously using the technology. Simultaneously, he argued that companies should arm democratic governments and militaries with the most advanced AI possible to combat autocratic adversaries.<\/p>\n<p class=\"dcr-130mj7b\">He expressed less concern about AI making it easier to kill people or conduct warfare and more about the reliability of the technology and the threat of its being consolidated by too small a number of people with \u201cfingers on the button\u201d who could control an autonomous drone army.<\/p>\n<p class=\"dcr-130mj7b\">Amodei\u2019s essay also foreshadowed some of the central issues involved in his fight with the Pentagon, including the potential for AI as a tool of mass surveillance. 
While arguing for bulwarks against the abuse of AI, he stated that, in his formulation, it was acceptable to use the technology for national defense \u201cin all ways except those which would make us more like our autocratic adversaries\u201d.<\/p>\n<p class=\"dcr-130mj7b\">While Amodei has so far stuck to the company\u2019s red lines, he has also repeatedly stated that he wants Anthropic to continue working with the Defense Department. The company\u2019s lawsuit against the DoD showcases how extensively it has been willing to work with the military and alter its products for their use.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAnthropic does not impose the same restrictions on the military\u2019s use of Claude as it does on civilian customers,\u201d Anthropic\u2019s California lawsuit stated. \u201cClaude Gov is less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The government has reportedly been using Claude for target selection and analysis in its bombing campaign against Iran, a use case that Anthropic has given no indication it objects to. In his blogpost on Anthropic\u2019s website last week, Amodei stated that he did not believe that his company had any role in the military\u2019s operational decision-making. He claimed that Anthropic supports American frontline warfighters and remains committed to providing them with technology.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe have said to the department of war that we are OK with all use cases,\u201d Amodei told CBS News last week. 
\u201cBasically 98 or 99% of the use cases they want to do, except for two.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the&hellip;\n","protected":false},"author":2,"featured_media":521750,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-537872","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/537872","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=537872"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/537872\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/521750"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=537872"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=537872"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=537872"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}