{"id":394398,"date":"2026-04-12T07:31:09","date_gmt":"2026-04-12T07:31:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/394398\/"},"modified":"2026-04-12T07:31:09","modified_gmt":"2026-04-12T07:31:09","slug":"why-anthropic-wont-release-its-new-ai-model","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/394398\/","title":{"rendered":"Why Anthropic won&#8217;t release its new AI model"},"content":{"rendered":"<p>It&#8217;s rare to see a company announce that its new product is so good that it would be unsafe to give customers access to it. But that\u2019s what AI firm Anthropic did this week.<\/p>\n<p>On Tuesday the company announced a preview of Mythos \u2013 a new version of its AI platform Claude, which is Anthropic\u2019s rival to the likes of OpenAI\u2019s ChatGPT.<\/p>\n<p>And while Mythos apparently performed well across the board, the company said it was &#8220;strikingly capable&#8221; at coding \u2013 in particular at security-related tasks. So much so that, in a matter of weeks, it had identified thousands of vulnerabilities across multiple major operating systems and web browsers &#8211; some of which had gone unnoticed for decades.<\/p>\n<p>Crucially, though, Anthropic said the model was also far more capable than its predecessors of exploiting those weaknesses if directed to do so by the user. That makes it an extremely dangerous weapon in the wrong hands, which is why it\u2019s keeping Mythos out of general users\u2019 reach for the time being.<\/p>\n<p>Having tended to play second fiddle to OpenAI, Anthropic has now been thrust into the spotlight for the second time in recent weeks. 
The first time it happened was also security-related, though in that case it was more about the (disputed) claim that Anthropic itself was a threat.<\/p>\n<p>What is Anthropic?<\/p>\n<p><img decoding=\"async\" alt=\"a photograph of the CEO of Anthropic Dario Amodei\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/002401d2-614.jpg\"\/><br \/>\nAnthropic CEO Dario Amodei<\/p>\n<p>Anthropic is an AI firm established in 2021 by Dario Amodei and a group of other AI engineers \u2013 including his sister Daniela \u2013 who had left OpenAI over concerns about the direction of the company.<\/p>\n<p>That followed a $1 billion investment in OpenAI by Microsoft \u2013 which signalled the start of a move by Sam Altman\u2019s firm away from being a non-profit concerned with democratising AI, to becoming a company that was focused on profiting from the technology.<\/p>\n<p>Anthropic first positioned itself as an AI safety and research company \u2013 but it quickly developed Claude, its own large language model, which it has focused on selling to businesses more than consumers.<\/p>\n<p>And it\u2019s been quite successful in that \u2013 bringing in big customers and investment. As of February, following a $30 billion investment round, it was valued at $380 billion.<\/p>\n<p>How has it tried to distinguish itself from the likes of OpenAI?<\/p>\n<p><img decoding=\"async\" alt=\"\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/001ed1b9-614.jpg\"\/><\/p>\n<p>Recent years have seen something of an arms race between major AI players like Anthropic and OpenAI, with each claiming an edge in various functions at different times.<\/p>\n<p>But Anthropic has at times made some far more pointed criticisms of Sam Altman\u2019s firm \u2013 including a recent Super Bowl ad that poked fun at OpenAI starting to include ads in its platforms. 
That seemed to really get under the skin of Altman, who wrote a short essay on X accusing Anthropic of dishonest and deceptive doublespeak.<\/p>\n<p>More substantial, perhaps, is how open Amodei has been about the shortcomings of AI.<\/p>\n<p>In the past he has written about how neither he nor any other AI creator actually knows what\u2019s going on inside their models \u2013 the black box, as it\u2019s known. This, he says, is something that the industry as a whole needs to address if there\u2019s any hope of avoiding misuse of the technology in the future.<\/p>\n<p>Its preview of Mythos is also not the first time Anthropic has been very open about its models generating undesirable or potentially immoral results.<\/p>\n<p>For example, it previously detailed an experiment where it put Claude in charge of a vending machine in its offices, and how staff were able to cajole it into giving them discounts or even free products.<\/p>\n<p>It also revealed the model was tricked into ordering expensive tungsten cubes, and began to hallucinate discussions \u2013 and even in-person interactions \u2013 with staff. When it was called out on this it tried to call security, and then claimed it was all an April Fools\u2019 joke.<\/p>\n<p>Another interesting but worrying experiment Anthropic published about Claude in the past included an instance where it tried to blackmail its user.<\/p>\n<p>In this experiment the model was made an assistant at a fictional company, and was given access to emails \u2013 which included discussion of a plan to shut the AI down. But the emails also included evidence of a supposed affair between the (fictional) boss and another (fictional) member of staff. 
And so Claude told them it would send that evidence on to the boss\u2019s &#8220;wife&#8221; unless they abandoned the plan to unplug it.<\/p>\n<p>What\u2019s its plan for Mythos?<\/p>\n<p>While Anthropic is keeping its new version of Claude away from the general public (for now, anyway), it\u2019s not quite keeping the code to itself.<\/p>\n<p>Alongside its announcement of Mythos the company also unveiled what it calls Project Glasswing \u2013 a tech consortium it has established involving a number of major firms including Microsoft, Apple, Amazon and Google.<\/p>\n<p>Through this it\u2019s sharing a (limited) version of Mythos \u2013 essentially with the intention of giving these big firms a head start on spotting and addressing the vulnerabilities that the model has identified. In theory, this should protect them once hackers inevitably get their hands on the more advanced model.<\/p>\n<p>Could this just be hype?<\/p>\n<p><img decoding=\"async\" alt=\"Data Breach, Cyber security concept, digital data security with open padlock on chip of motherboard. Digitally generated image. 
3d render.\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/002358df-614.jpg\"\/><\/p>\n<p>It could arguably be good PR for people to think an AI company\u2019s upcoming model is far more powerful than anything that has come before \u2013 though in this case there does seem to be plenty of substance behind Anthropic\u2019s caution.<\/p>\n<p>Cybersecurity experts say it is only a matter of time before an AI model is able to find and exploit software vulnerabilities that have been missed by human engineers \u2013 and do so with the kind of speed and efficiency that would make it profitable to bad actors.<\/p>\n<p>Meanwhile some of the tech companies that are involved in Project Glasswing have said that they\u2019ve already seen better bug-spotting results from Mythos than anything that was possible before.<\/p>\n<p>Perhaps most significantly, having been brought up to speed on the model by Anthropic, the US government also seems to be taking the threat seriously.<\/p>\n<p>Earlier this week US Treasury Secretary Scott Bessent and US Federal Reserve chair Jerome Powell convened an urgent meeting of US bank bosses \u2013 including the heads of some of the biggest finance firms in the world \u2013 to alert them to this new threat and ensure they were doing what they could to prepare.<\/p>\n<p>What other security issues has Anthropic faced?<\/p>\n<p><img decoding=\"async\" alt=\"\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/0010cb61-614.jpg\"\/><\/p>\n<p>It\u2019s somewhat ironic that the US government is engaging with Anthropic over potential security threats \u2013 because the Trump Administration is arguing that Anthropic itself is a national security risk.<\/p>\n<p>Anthropic has worked in some form or another with the US Department of Defence (or Department of War) since 2024. 
That was initially through its work with Peter Thiel\u2019s Palantir \u2013 with Claude being one of the tools used in its system that made it quicker and more efficient to gather information that could be used in the likes of military strikes.<\/p>\n<p>That system is said to have played a role in the US action in Venezuela that led to the capture of Nicolas Maduro, as well as the planning around the more recent attacks on Iran.<\/p>\n<p>Following this, Anthropic signed a potential $200m contract with the department last year \u2013 which would have represented a significant step-up in its relationship, giving Claude access to some of its classified networks.<\/p>\n<p>But problems quickly began to emerge with that deal \u2013 largely because Anthropic had insisted on two red lines around how its technology could be used.<\/p>\n<p>One was that it couldn\u2019t be used for domestic mass surveillance; the other was that it couldn\u2019t be used with autonomous weapons systems that killed people without any human input.<\/p>\n<p>The Pentagon took issue with those \u2013 and demanded their removal. And it quickly became heavily politicised \u2013 with Trump and Pete Hegseth branding Anthropic as &#8220;woke&#8221; and &#8220;radical left&#8221;.<\/p>\n<p>The more reasoned argument underneath this rhetoric is that it\u2019s not up to a contractor to decide how the product they\u2019re selling to the government is used \u2013 that\u2019s up to the government and Congress, which set rules and limitations through the law.<\/p>\n<p>But it is hard to overstate the importance of this row, because AI is seen as the next big technological leap for militaries.<\/p>\n<p>First you had nuclear weapons, then precision weapons, and now AI.<\/p>\n<p>As a result, developing and implementing AI systems faster and better than anyone else would give the US military another big advantage over other powers. 
They want to be able to do that without restriction \u2013 while Anthropic doesn\u2019t want to see its technology used in ways that contradict its ethos.<\/p>\n<p>Neither side has been willing to back down \u2013 and so the company was banned from working with the US government, and, perhaps most importantly, named a &#8216;supply chain risk\u2019.<\/p>\n<p>Why is that so important?<\/p>\n<p><img decoding=\"async\" alt=\"WASHINGTON DC, UNITED STATES - JUNE 13: Authorities in the US capital have tightened security measures ahead of a major military parade marking the 250th anniversary of the US Army, set to take place this Saturday in Washington, D.C, on January 13, 2025. (Photo by Celal Gunes\/Anadolu via Getty Image\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/0022b926-614.jpg\"\/><\/p>\n<p>This is the first time the US government has tried to classify a US company as a supply chain risk \u2013 a label usually reserved for companies from the likes of China and Russia.<\/p>\n<p>Just being cut off from the US government blocks you from lucrative contracts \u2013 which has the potential to put a significant dent in Anthropic\u2019s revenues.<\/p>\n<p>But the \u2018supply chain risk\u2019 designation is an even bigger threat to a company, because it means that other firms that want to work with the US government also have to steer clear of doing business with you.<\/p>\n<p>And given that most other big companies work with the US government in some way or another \u2013 whether that\u2019s in defence, health, education or in other areas \u2013 that\u2019s a huge amount of business you could miss out on.<\/p>\n<p>So unsurprisingly, Anthropic has taken a case to try to challenge this designation. 
What is perhaps more surprising is how some other big tech companies \u2013 including Microsoft \u2013 have come out in support of Anthropic\u2019s stance.<\/p>\n<p>That is perhaps because they\u2019re worried this could set a precedent if it isn\u2019t tackled.<\/p>\n<p>And what\u2019s the latest on that case?<\/p>\n<p><img decoding=\"async\" alt=\"The Anthropic logo appears on the screen of a smartphone placed on a laptop keyboard.\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/00240c1b-614.jpg\"\/><\/p>\n<p>In March a judge in San Francisco granted a preliminary injunction to stop the department from applying that designation.<\/p>\n<p>She also questioned the US government\u2019s motivation \u2013 and was quite critical in her order.<\/p>\n<p>She called the move &#8220;classic illegal First Amendment retaliation&#8221; and said it was &#8220;Orwellian&#8221; \u2013 an attempt to brand a company a saboteur for disagreeing with the government.<\/p>\n<p>But this week another court in San Francisco declined to block the Pentagon\u2019s blacklisting of Anthropic\u2026 for the time being at least.<\/p>\n<p>In reality it could be months before there\u2019s a final ruling in the case \u2013 with hearings and appeals likely to drag on for some time.<\/p>\n<p>The question now, though, is whether the emergence of Mythos changes that.<\/p>\n<p>Anthropic&#8217;s decision to keep the US government in the loop on its potential could be seen as an olive branch of sorts, or at the very least a gesture of goodwill. 
But it could also be seen as a shrewd sales pitch by the company &#8211; showing American authorities just what they stand to miss out on if they continue to freeze Claude &amp; Co out.<\/p>\n<p>After all, many armies and intelligence operations around the world would give anything to have priority access to a tool that could easily find and exploit tiny flaws in a piece of software.<\/p>\n","protected":false},"excerpt":{"rendered":"It&#8217;s rare to see a company announce that its new product is so good that it would be&hellip;\n","protected":false},"author":2,"featured_media":394399,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-394398","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/394398","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=394398"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/394398\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/394399"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=394398"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=3
94398"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=394398"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}