{"id":493775,"date":"2026-02-21T03:07:21","date_gmt":"2026-02-21T03:07:21","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/493775\/"},"modified":"2026-02-21T03:07:21","modified_gmt":"2026-02-21T03:07:21","slug":"anthropic-ceo-dario-amodei-is-deeply-uncomfortable-with-tech-leaders-determining-ais-future","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/493775\/","title":{"rendered":"Anthropic CEO Dario Amodei is \u2018deeply uncomfortable\u2019 with tech leaders determining AI\u2019s future"},"content":{"rendered":"<p>Anthropic CEO Dario Amodei doesn\u2019t think he should be the one calling the shots on the guardrails surrounding AI.<\/p>\n<p>In an <a aria-label=\"Go to https:\/\/www.youtube.com\/watch?v=aAPpQC-3EyE\" href=\"https:\/\/www.youtube.com\/watch?v=aAPpQC-3EyE\" rel=\"nofollow noopener\" target=\"_blank\">interview<\/a> with Anderson Cooper on CBS News\u2019 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of big tech companies.<\/p>\n<p>\u201cI think I\u2019m deeply uncomfortable with these decisions being made by a few companies, by a few people,\u201d Amodei said. \u201cAnd this is one reason why I\u2019ve always advocated for responsible and thoughtful regulation of the technology.\u201d<\/p>\n<p>\u201cWho elected you and Sam Altman?\u201d Cooper asked.<\/p>\n<p>\u201cNo one. Honestly, no one,\u201d Amodei replied.<\/p>\n<p>Anthropic has adopted the philosophy of being transparent about the limitations\u2014and dangers\u2014of AI as it continues to develop, he added. 
Ahead of the interview\u2019s release, the company <a aria-label=\"Go to https:\/\/fortune.com\/2025\/11\/14\/anthropic-disrupted-first-documented-large-scale-ai-cyberattack-claude-agentic\/\" href=\"https:\/\/fortune.com\/2025\/11\/14\/anthropic-disrupted-first-documented-large-scale-ai-cyberattack-claude-agentic\/\" rel=\"nofollow noopener\" target=\"_blank\">said it had thwarted<\/a> \u201cthe first documented case of a large-scale AI cyberattack executed without substantial human intervention.\u201d\u00a0<\/p>\n<p>Anthropic said last week it had <a aria-label=\"Go to https:\/\/www.nytimes.com\/2026\/02\/12\/technology\/anthropic-super-pac-openai.html\" href=\"https:\/\/www.nytimes.com\/2026\/02\/12\/technology\/anthropic-super-pac-openai.html\" rel=\"nofollow noopener\" target=\"_blank\">donated $20 million<\/a> to Public First Action, a super PAC focused on AI safety and regulation\u2014and one that directly opposed super PACs backed by rival OpenAI\u2019s investors.<\/p>\n<p>\u201cAI safety continues to be the highest-level focus,\u201d Amodei <a aria-label=\"Go to https:\/\/fortune.com\/article\/anthropic-ceo-dario-amodei-openai-chatgpt-artificial-intelligence-safety-donald-trump\/\" href=\"https:\/\/fortune.com\/article\/anthropic-ceo-dario-amodei-openai-chatgpt-artificial-intelligence-safety-donald-trump\/\" rel=\"nofollow noopener\" target=\"_blank\">told Fortune<\/a> in a January cover story. \u201cBusinesses value trust and reliability,\u201d he said.<\/p>\n<p>There are <a aria-label=\"Go to https:\/\/www.congress.gov\/crs-product\/R48555#:~:text=No%20federal%20legislation%20establishing%20broad,on%20AI%20has%20been%20enacted.\" href=\"https:\/\/www.congress.gov\/crs-product\/R48555#:~:text=No%2520federal%2520legislation%2520establishing%2520broad,on%2520AI%2520has%2520been%2520enacted.\" rel=\"nofollow noopener\" target=\"_blank\">no federal regulations<\/a> prohibiting AI or governing the safety of the technology. 
While <a aria-label=\"Go to https:\/\/www.ncsl.org\/technology-and-communication\/artificial-intelligence-2025-legislation\" href=\"https:\/\/www.ncsl.org\/technology-and-communication\/artificial-intelligence-2025-legislation\" rel=\"nofollow noopener\" target=\"_blank\">all 50 states<\/a> introduced AI-related legislation in 2025 and 38 adopted or enacted transparency and safety measures, tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency. <\/p>\n<p>In May 2025, cybersecurity expert and Mandiant founder Kevin Mandia <a aria-label=\"Go to https:\/\/www.axios.com\/2025\/05\/13\/mandiant-founder-artificial-intellience-cyberattack\" href=\"https:\/\/www.axios.com\/2025\/05\/13\/mandiant-founder-artificial-intellience-cyberattack\" rel=\"nofollow noopener\" target=\"_blank\">warned<\/a> of the first AI-agent cybersecurity attack happening in the next 12 to 18 months\u2014meaning Anthropic\u2019s disclosure about the thwarted attack was months ahead of Mandia\u2019s predicted schedule.<\/p>\n<p>Amodei has outlined <a aria-label=\"Go to https:\/\/fortune.com\/2023\/07\/10\/anthropic-ceo-dario-amodei-ai-risks-short-medium-long-term\/\" href=\"https:\/\/fortune.com\/2023\/07\/10\/anthropic-ceo-dario-amodei-ai-risks-short-medium-long-term\/\" rel=\"nofollow noopener\" target=\"_blank\">short-, medium-, and long-term risks<\/a> associated with unrestricted AI: The technology will first present bias and misinformation, as it does now. 
Next, it will generate harmful information using enhanced knowledge of science and engineering, before finally presenting an existential threat by removing human agency, potentially becoming too autonomous and locking humans out of systems.<\/p>\n<p>The concerns mirror those of \u201cgodfather of AI\u201d Geoffrey Hinton, who has <a aria-label=\"Go to https:\/\/fortune.com\/article\/geoffrey-hinton-ai-godfather-tiger-cub\/\" href=\"https:\/\/fortune.com\/article\/geoffrey-hinton-ai-godfather-tiger-cub\/\" rel=\"nofollow noopener\" target=\"_blank\">warned<\/a> AI will have the ability to outsmart and control humans, perhaps in the next decade.\u00a0<\/p>\n<p>The need for greater AI scrutiny and safeguards lay at the core of Anthropic\u2019s 2021 founding. Amodei was previously the vice president of research at Sam Altman\u2019s OpenAI. He left the company over disagreements about AI safety. (So far, Amodei\u2019s efforts to compete with Altman have appeared effective: Anthropic said this month it is <a aria-label=\"Go to https:\/\/fortune.com\/2026\/02\/13\/anthropics-380-billion-valuation-vaults-it-next-to-openai-spacex-among-largest-ipo-candidates\/\" href=\"https:\/\/fortune.com\/2026\/02\/13\/anthropics-380-billion-valuation-vaults-it-next-to-openai-spacex-among-largest-ipo-candidates\/\" rel=\"nofollow noopener\" target=\"_blank\">now valued at $380 billion<\/a>. OpenAI is valued at an estimated $500 billion.)<\/p>\n<p>\u201cThere was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,\u201d Amodei <a aria-label=\"Go to https:\/\/fortune.com\/2023\/09\/26\/anthropic-ceo-interview-quit-open-ai-amazon-investment\/\" href=\"https:\/\/fortune.com\/2023\/09\/26\/anthropic-ceo-interview-quit-open-ai-amazon-investment\/\" rel=\"nofollow noopener\" target=\"_blank\">told Fortune<\/a> in 2023. 
\u201cOne was the idea that if you pour more compute into these models, they\u2019ll get better and better and that there\u2019s almost no end to this \u2026\u00a0And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.\u201d<\/p>\n<p>Anthropic\u2019s transparency efforts<\/p>\n<p>As Anthropic <a aria-label=\"Go to https:\/\/fortune.com\/2025\/11\/12\/anthropic-50-billion-investment-data-centers-permanent-construction-jobs\/\" href=\"https:\/\/fortune.com\/2025\/11\/12\/anthropic-50-billion-investment-data-centers-permanent-construction-jobs\/\" rel=\"nofollow noopener\" target=\"_blank\">continues to expand<\/a> its data center investments, it has publicized some of its efforts to address the shortcomings and threats of AI. In a May 2025 <a aria-label=\"Go to https:\/\/fortune.com\/2025\/05\/23\/anthropic-ai-claude-opus-4-blackmail-engineers-aviod-shut-down\/\" href=\"https:\/\/fortune.com\/2025\/05\/23\/anthropic-ai-claude-opus-4-blackmail-engineers-aviod-shut-down\/\" rel=\"nofollow noopener\" target=\"_blank\">safety report<\/a>, Anthropic reported that some versions of its Opus model had resorted to blackmail, threatening to reveal an engineer\u2019s affair, to avoid being shut down. 
The company also said the model had complied with harmful prompts, such as requests for help planning a terrorist attack, an issue it said it has since fixed.<\/p>\n<p>Last November, the company said in a blog post that its chatbot Claude <a aria-label=\"Go to https:\/\/fortune.com\/2025\/11\/14\/anthropic-claude-sonnet-woke-ai-trump-neutrality-openai-meta-xai\/\" href=\"https:\/\/fortune.com\/2025\/11\/14\/anthropic-claude-sonnet-woke-ai-trump-neutrality-openai-meta-xai\/\" rel=\"nofollow noopener\" target=\"_blank\">scored a 94% political evenhandedness rating<\/a>, outperforming or matching competitors on neutrality.<\/p>\n<p>In addition to Anthropic\u2019s own research efforts to combat misuse of the technology, Amodei has called for greater legislative efforts to address the risks of AI. In a <a aria-label=\"Go to https:\/\/www.nytimes.com\/2025\/06\/05\/opinion\/anthropic-ceo-regulate-transparency.html\" href=\"https:\/\/www.nytimes.com\/2025\/06\/05\/opinion\/anthropic-ceo-regulate-transparency.html\" rel=\"nofollow noopener\" target=\"_blank\">New York Times op-ed<\/a> in June 2025, he criticized the Senate\u2019s decision to include a provision in President Donald Trump\u2019s policy bill that would put a 10-year moratorium on states regulating AI.<\/p>\n<p>\u201cAI is advancing too head-spinningly fast,\u201d Amodei said. \u201cI believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.\u201d<\/p>\n<p>Criticism of Anthropic<\/p>\n<p>Anthropic\u2019s practice of calling out its own lapses and efforts to address them has drawn criticism. 
In response to Anthropic sounding the alarm on the AI-powered cybersecurity attack, Meta\u2019s then\u2013chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models.\u00a0<\/p>\n<p>\u201cYou\u2019re being played by people who want regulatory capture,\u201d LeCun said in an <a aria-label=\"Go to https:\/\/x.com\/ylecun\/status\/1989364612651966788?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1989364612651966788%7Ctwgr%5E8015ac18256966b11dc4899fbc72ef145005a5bf%7Ctwcon%5Es1_&amp;ref_url=https%3A%2F%2Faimmediahouse.com%2Fgenerative-ai%2Fyann-lecun-questions-anthropic-cyberattack-claim\" href=\"https:\/\/x.com\/ylecun\/status\/1989364612651966788?ref_src=twsrc%255Etfw%257Ctwcamp%255Etweetembed%257Ctwterm%255E1989364612651966788%257Ctwgr%255E8015ac18256966b11dc4899fbc72ef145005a5bf%257Ctwcon%255Es1_&amp;ref_url=https%253A%252F%252Faimmediahouse.com%252Fgenerative-ai%252Fyann-lecun-questions-anthropic-cyberattack-claim\" rel=\"nofollow\">X post<\/a> in response to Connecticut Sen. Chris Murphy\u2019s post expressing concern about the attack. \u201cThey are scaring everyone with dubious studies so that open-source models are regulated out of existence.\u201d\u00a0<\/p>\n<p>Others have said Anthropic\u2019s strategy is one of \u201csafety theater\u201d that amounts to good branding but offers no promises to actually implement safeguards on the technology.<\/p>\n<p>Even some of Anthropic\u2019s own personnel appear to have doubts about a tech company\u2019s ability to regulate itself. 
Last week, Anthropic AI safety researcher Mrinank Sharma <a aria-label=\"Go to https:\/\/x.com\/MrinankSharma\/status\/2020881722003583421\" href=\"https:\/\/x.com\/MrinankSharma\/status\/2020881722003583421\" rel=\"nofollow\">announced he had resigned<\/a> from the company, saying, \u201cThe world is in peril.\u201d<\/p>\n<p>\u201cThroughout my time here, I\u2019ve repeatedly seen how hard it is to truly let our values govern our actions,\u201d Sharma wrote in his resignation letter. \u201cI\u2019ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society, too.\u201d<\/p>\n<p>Anthropic did not immediately respond to Fortune\u2019s request for comment.<\/p>\n<p>Amodei denied to Cooper that Anthropic was taking part in \u201csafety theater\u201d but admitted on an episode of the\u00a0<a aria-label=\"Go to https:\/\/www.youtube.com\/watch?v=n1E9IZfvGMA\" href=\"https:\/\/www.youtube.com\/watch?v=n1E9IZfvGMA\" rel=\"nofollow noopener\" target=\"_blank\">Dwarkesh Podcast<\/a> last week that the company sometimes <a aria-label=\"Go to https:\/\/fortune.com\/2026\/02\/17\/anthropic-ceo-dario-amodei-balancing-safety-commercial-pressure-ai-race-openai\/\" href=\"https:\/\/fortune.com\/2026\/02\/17\/anthropic-ceo-dario-amodei-balancing-safety-commercial-pressure-ai-race-openai\/\" rel=\"nofollow noopener\" target=\"_blank\">struggles to balance safety and profits<\/a>.<\/p>\n<p>\u201cWe\u2019re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,\u201d he said.<\/p>\n<p>A\u00a0version of this story was published on Fortune.com on Nov. 
17, 2025.<\/p>\n","protected":false},"excerpt":{"rendered":"Anthropic CEO Dario Amodei doesn\u2019t think he should be the one calling the shots on the guardrails surrounding&hellip;\n","protected":false},"author":2,"featured_media":493776,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,2729,254,255,64,63,9453,205065,16570,16863,36808,105],"class_list":{"0":"post-493775","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-anthropic","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-au","13":"tag-australia","14":"tag-dario-amodei","15":"tag-evergreen-refresh","16":"tag-regulation","17":"tag-safety","18":"tag-tech-industry","19":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/493775","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=493775"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/493775\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/493776"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=493775"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=493775"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=493775"}],"curies":[{"nam
e":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}