{"id":499530,"date":"2026-02-25T23:14:10","date_gmt":"2026-02-25T23:14:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/499530\/"},"modified":"2026-02-25T23:14:10","modified_gmt":"2026-02-25T23:14:10","slug":"anthropic-ditches-its-core-safety-promise-in-the-middle-of-an-ai-red-line-fight-with-the-pentagon","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/499530\/","title":{"rendered":"Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon"},"content":{"rendered":"<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmm22gzc5000x27qgej93hjm0@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmm22yosp0005356r19pch1f3@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmm23dg1o000e356rczovpenx@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            In a <a href=\"https:\/\/www.anthropic.com\/news\/responsible-scaling-policy-v3\" target=\"_blank\" rel=\"nofollow noopener\">blog post<\/a> Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmm22uztn0000356r8g9aug5p@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The announcement is surprising, because Anthropic has described itself as the AI company with a \u201c<a href=\"https:\/\/x.com\/AmandaAskell\/status\/1995610570859704344\" target=\"_blank\" rel=\"nofollow\">soul<\/a>.\u201d It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines.\n    <\/p>\n<p class=\"paragraph-elevate inline-placeholder vossi-paragraph_elevate\" data-uri=\"cms.cnn.com\/_components\/paragraph\/instances\/cmm235s67000c356r2ric1na8@published\" data-editable=\"text\" data-component-name=\"paragraph\" data-article-gutter=\"true\">\n            The policy change is separate and unrelated to Anthropic\u2019s discussions with the Pentagon, according to a source familiar with the matter. Defense Secretary Pete Hegseth <a href=\"https:\/\/www.cnn.com\/2026\/02\/24\/tech\/hegseth-anthropic-ai-military-amodei\" rel=\"nofollow noopener\" target=\"_blank\">gave Anthropic CEO Dario Amodei an ultimatum<\/a> on Tuesday to roll back the company\u2019s AI safeguards or risk losing a $200 million Pentagon contract. 
The Pentagon threatened to put Anthropic on what is effectively a government blacklist.

But the company said in its blog post that its previous safety policy was designed to build industry consensus around mitigating AI risks – guardrails that the industry blew through. Anthropic also noted its safety policy was out of step with Washington’s current anti-regulatory political climate.

Anthropic’s previous policy (https://www.anthropic.com/news/anthropics-responsible-scaling-policy) stipulated that it should pause training more powerful models if their capabilities outstripped the company’s ability to control them and ensure their safety – a measure that has been removed in the new policy (https://www-cdn.anthropic.com/e670587677525f28df69b59e5fb4c22cc5461a17.pdf). Anthropic argued that responsible AI developers pausing growth while less careful actors plowed ahead could “result in a world that is less safe.”

As part of the new policy, Anthropic said it will separate its own safety plans from its recommendations for the AI industry.

Anthropic wrote that it had hoped its original safety principles “would encourage other AI companies to introduce similar policies.
This is the idea of a ‘race to the top’ (the converse of a ‘race to the bottom’), in which different industry players are incentivized to improve, rather than weaken, their models’ safeguards and their overall safety posture.”

The company now suggests that hasn’t played out.

In a statement to CNN, an Anthropic spokesperson described the updated policy as “the strongest to date on the level of public accountability and transparency.”

“We’ve gone a significant step further from our prior policies by committing to publicly publish detailed reports at regular intervals on our plans to strengthen our risk mitigations, as well as the threat models and capabilities of all our models,” the statement said. “From the beginning, we’ve said the pace of AI and uncertainties in the field would require us to rapidly iterate and improve the policy.”

Anthropic’s new safety policy includes a “Frontier Safety Roadmap” that outlines the company’s self-imposed guidelines and safeguards.
But the company acknowledged the new framework is more flexible than its past policy.

“Rather than being hard commitments, these are public goals that we will openly grade our progress towards,” the company said in its blog post.

The change comes a day after Hegseth set a Friday deadline (https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei) for Amodei to roll back the company’s AI safeguards or risk the loss of the $200 million contract and the blacklist designation.

Anthropic has concerns over two issues that it isn’t willing to drop, according to a source familiar with the company’s meeting with Hegseth: AI-controlled weapons and mass domestic surveillance of American citizens. Anthropic believes AI is not reliable enough to operate weapons, and no laws or regulations yet cover how AI could be used in mass surveillance, a source said.

AI researchers applauded Anthropic’s stance on social media on Tuesday and expressed concerns about the idea of AI being used for government surveillance.

The company has long positioned itself as the AI business that prioritizes safety. Anthropic has published research showing how its own AI models could be capable of blackmail under certain conditions (https://www-cdn.anthropic.com/6d8a8055020700718b0c49369f60816ba2a7c285.pdf).
The company recently donated $20 million (https://www.anthropic.com/news/donate-public-first-action) to Public First Action, a political group pushing for AI safeguards and education.

But the company has faced increasing pressure and competition from both the government and its rivals. Hegseth, for example, plans to invoke the Defense Production Act against Anthropic and designate the company a supply chain risk if it does not comply with the Pentagon’s demands, CNN reported on Tuesday (https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei). OpenAI and Anthropic have also been locked in a race to launch new enterprise AI tools in a bid to win workplace customers.

Jared Kaplan, Anthropic’s chief science officer, suggested in an interview with Time (https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/) that the change was made more in the name of safety than of competition.

“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan told the magazine.
“We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

CNN’s Hadas Gold contributed to this story.

This story has been updated with additional information.