{"id":241208,"date":"2025-10-21T14:27:07","date_gmt":"2025-10-21T14:27:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/241208\/"},"modified":"2025-10-21T14:27:07","modified_gmt":"2025-10-21T14:27:07","slug":"a-statement-from-dario-amodei-on-anthropics-commitment-to-american-ai-leadership-anthropic","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/241208\/","title":{"rendered":"A statement from Dario Amodei on Anthropic&#8217;s commitment to American AI leadership \\ Anthropic"},"content":{"rendered":"<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">A statement from Anthropic CEO Dario Amodei on Anthropic\u2019s commitment to advancing America&#8217;s leadership in building powerful and beneficial AI.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Anthropic is built on a simple principle: AI should be a force for <a href=\"https:\/\/www.darioamodei.com\/essay\/machines-of-loving-grace\" rel=\"nofollow noopener\" target=\"_blank\">human progress, not peril<\/a>. That means making products that are <a href=\"https:\/\/www.anthropic.com\/news\/claude-for-life-sciences\" rel=\"nofollow noopener\" target=\"_blank\">genuinely useful<\/a>, speaking honestly about risks and benefits, and working with anyone serious about getting this right. I strongly agree with Vice President JD Vance&#8217;s <a href=\"https:\/\/www.newsmax.com\/newsmax-tv\/vance-artificial-intelligence-ai\/2025\/10\/16\/id\/1230684\/\" rel=\"nofollow noopener\" target=\"_blank\">recent comments<\/a> on AI\u2014particularly his point that we need to maximize applications that help people, like breakthroughs in medicine and disease prevention, while minimizing the harmful ones. 
This position is both wise and what the public <a href=\"https:\/\/www.pewresearch.org\/internet\/2025\/04\/03\/views-of-risks-opportunities-and-regulation-of-ai\/#6d2b9b266433bfda6c8fc2f498738a4c\" rel=\"nofollow noopener\" target=\"_blank\">overwhelmingly wants<\/a>.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Anthropic is the fastest-growing software company in history, with revenue growing from a $1B to $7B run rate over the last nine months, and we&#8217;ve managed to do this while deploying AI thoughtfully and responsibly. There are products we will not build and risks we will not take, even if they would make money.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Our longstanding position is that managing the societal impacts of AI should be a matter of policy over politics. I fully believe that Anthropic, the administration, and leaders across the political spectrum want the same thing: to ensure that powerful AI technology benefits the American people and that America advances and secures its lead in AI development.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Despite our track record of communicating frequently and transparently about our positions, there has been a recent uptick in inaccurate claims about Anthropic&#8217;s policy stances. Some are significant enough that they warrant setting the record straight.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Our alignment with the Trump administration on key areas of AI policy<\/p>\n<p>We work directly with the federal government in several ways. In July the Department of War <a href=\"https:\/\/www.anthropic.com\/news\/anthropic-and-the-department-of-defense-to-advance-responsible-ai-in-defense-operations\" rel=\"nofollow noopener\" target=\"_blank\">awarded<\/a> Anthropic a two-year, $200 million agreement to prototype frontier AI capabilities that advance national security. 
We have <a href=\"https:\/\/www.anthropic.com\/news\/offering-expanded-claude-access-across-all-three-branches-of-government\" rel=\"nofollow noopener\" target=\"_blank\">partnered<\/a> with the General Services Administration to offer Claude for Enterprise and Claude for Government for $1 across the federal government. And Claude is deployed across classified networks through partners like <a href=\"https:\/\/investors.palantir.com\/news-details\/2024\/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations\/\" rel=\"nofollow noopener\" target=\"_blank\">Palantir<\/a> and at <a href=\"https:\/\/www.anthropic.com\/news\/lawrence-livermore-national-laboratory-expands-claude-for-enterprise-to-empower-scientists-and\" rel=\"nofollow noopener\" target=\"_blank\">Lawrence Livermore National Laboratory<\/a>.<\/p>\n<p>Anthropic <a href=\"https:\/\/x.com\/AnthropicAI\/status\/1948105498194014303\" rel=\"nofollow\">publicly praised<\/a> President Trump\u2019s AI Action Plan. We have been supportive of the President\u2019s efforts to expand energy provision in the US in order to win the AI race, and I <a href=\"https:\/\/www.anthropic.com\/news\/investing-in-energy-to-secure-america-s-ai-future\" rel=\"nofollow noopener\" target=\"_blank\">personally attended<\/a> an AI and energy summit in Pennsylvania with President Trump, where he and I had a good conversation about US leadership in AI. 
Anthropic\u2019s Chief Product Officer attended a White House event where we <a href=\"https:\/\/www.anthropic.com\/news\/anthropic-signs-cms-health-tech-ecosystem-pledge-to-advance-healthcare-interoperability\" rel=\"nofollow noopener\" target=\"_blank\">joined a pledge<\/a> to accelerate healthcare applications of AI, and our Head of External Affairs attended the White House\u2019s <a href=\"https:\/\/www.anthropic.com\/news\/anthropic-signs-pledge-to-americas-youth-investing-in-ai-education\" rel=\"nofollow noopener\" target=\"_blank\">AI Education Taskforce event<\/a> to support their efforts to advance AI fluency for teachers.<\/p>\n<p>Every major AI company has <a href=\"https:\/\/www.politico.com\/news\/2025\/08\/17\/sam-altman-chatgpt-california-00449492\" rel=\"nofollow noopener\" target=\"_blank\">hired<\/a> policy experts from both parties and recent administrations\u2014Anthropic is no different. We&#8217;ve hired Republicans and Democrats alike, and built an <a href=\"https:\/\/www.anthropic.com\/news\/introducing-the-anthropic-national-security-and-public-sector-advisory-council\" rel=\"nofollow noopener\" target=\"_blank\">advisory council<\/a> that includes senior former Trump administration officials. 
Anthropic makes hiring decisions based on candidates&#8217; expertise, integrity, and competence, not their political affiliations.<\/p>\n<p>We (and <a href=\"https:\/\/www.rga.org\/republican-governors-praise-one-big-beautiful-bill-urge-congress-allow-states-protect-citizens-misuse-artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">many<\/a> <a href=\"https:\/\/www.scag.gov\/media\/opvgxagq\/2025-05-15-letter-to-congress-re-proposed-ai-preemption-_final.pdf\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">other<\/a> <a href=\"https:\/\/www.riaa.com\/riaa-chairman-ceo-mitch-glazier-statement-on-state-ai-ban-following-us-senate-99-1-vote\/\" rel=\"nofollow noopener\" target=\"_blank\">organizations<\/a>) respectfully disagreed with a single proposed amendment in the One Big Beautiful Bill: the 10-year moratorium on state-level AI laws, which would have blocked any state action without offering a federal alternative. That specific provision was <a href=\"https:\/\/www.reuters.com\/legal\/government\/us-senate-strikes-ai-regulation-ban-trump-megabill-2025-07-01\/\" rel=\"nofollow noopener\" target=\"_blank\">voted down<\/a> by Republicans and Democrats in a 99-1 vote in the Senate. Our longstanding position has been that a uniform federal approach is preferable to a patchwork of state laws. I <a href=\"https:\/\/www.nytimes.com\/2025\/06\/05\/opinion\/anthropic-ceo-regulate-transparency.html\" rel=\"nofollow noopener\" target=\"_blank\">proposed such a standard<\/a> months ago and we\u2019re ready to work with both parties to make it happen.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Our preference for a national AI standard<\/p>\n<p>While we continue to advocate for that federal standard, AI is moving so fast that we can\u2019t wait for Congress to act. We therefore supported a carefully designed bill in California, where most of America\u2019s leading AI labs, including Anthropic, are headquartered. 
This bill, SB 53, requires the largest AI developers to make their frontier model safety protocols public and is <a href=\"https:\/\/legiscan.com\/CA\/text\/SB53\/id\/3270002\" rel=\"nofollow noopener\" target=\"_blank\">written to exempt<\/a> any company with an annual gross revenue below $500M\u2014therefore only applying to the very largest AI companies. Anthropic supported this exemption to protect startups and in fact proposed an <a href=\"https:\/\/www.anthropic.com\/news\/the-need-for-transparency-in-frontier-ai\" rel=\"nofollow noopener\" target=\"_blank\">early version<\/a> of it.<\/p>\n<p>Some have suggested that we are somehow interested in harming the startup ecosystem. Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is powering <a href=\"https:\/\/www.inc.com\/ben-sherry\/after-partnering-with-anthropic-replit-has-grown-revenue-by-10x\/91147509\" rel=\"nofollow noopener\" target=\"_blank\">an entirely new generation<\/a> of AI-native companies. Damaging that ecosystem makes no sense for us.<\/p>\n<p>I&#8217;ve heard arguments that state AI regulation could slow down the US AI industry and hand China the lead. But the real risk to American AI leadership isn&#8217;t a single state law that only applies to the largest companies\u2014it&#8217;s filling the PRC&#8217;s data centers with US chips <a href=\"https:\/\/newsletter.semianalysis.com\/p\/huawei-ascend-production-ramp?_gl=1*utj4i9*_ga*MTUxNjAyMzM1OS4xNzYwODI5ODE4*_ga_FKWNM9FBZ3*czE3NjA4Mjk4MTgkbzEkZzAkdDE3NjA4Mjk4MTgkajYwJGwwJGgxMzkwNzQ5OTc3\" rel=\"nofollow noopener\" target=\"_blank\">they can&#8217;t make themselves<\/a>. 
We agree with leaders like Senators <a href=\"https:\/\/www.cotton.senate.gov\/news\/press-releases\/cotton-introduces-bill-to-prevent-diversion-of-advanced-chips-to-americas-adversaries-and-protect-us-product-integrity\" rel=\"nofollow noopener\" target=\"_blank\">Tom Cotton<\/a> and <a href=\"https:\/\/www.hawley.senate.gov\/hawley-introduces-legislation-to-decouple-american-ai-development-from-communist-china\/\" rel=\"nofollow noopener\" target=\"_blank\">Josh Hawley<\/a> that this would only help the Chinese Communist Party win the race to the AI frontier. We are the only frontier AI company to <a href=\"https:\/\/www.anthropic.com\/news\/updating-restrictions-of-sales-to-unsupported-regions\" rel=\"nofollow noopener\" target=\"_blank\">restrict<\/a> sales of AI services to PRC-controlled companies, forgoing significant short-term revenue to avoid fueling AI platforms and applications that would help the Chinese Communist Party&#8217;s military and intelligence services.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Our progress on an AI industry-wide challenge: model bias<\/p>\n<p>Some have claimed that Anthropic&#8217;s models are uniquely politically biased. This is not only unfounded but directly contradicted by the data. A January <a href=\"https:\/\/manhattan.institute\/article\/measuring-political-preferences-in-ai-systems-an-integrative-approach\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a> from the Manhattan Institute, a conservative think tank, found Anthropic&#8217;s main model (at the time, Claude Sonnet 3.5) to be less politically biased than models from most of the other major providers. 
Data from a May <a href=\"https:\/\/www.gsb.stanford.edu\/faculty-research\/working-papers\/measuring-perceived-slant-large-language-models-through-user\" rel=\"nofollow noopener\" target=\"_blank\">Stanford study<\/a> on user perceptions of bias in AI models likewise shows no reason to single out Anthropic: many models from other providers were rated as more biased. The system cards for our latest models, <a href=\"https:\/\/assets.anthropic.com\/m\/12f214efcc2f457a\/original\/Claude-Sonnet-4-5-System-Card.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Sonnet 4.5<\/a> and <a href=\"https:\/\/assets.anthropic.com\/m\/99128ddd009bdcb\/Claude-Haiku-4-5-System-Card.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Haiku 4.5<\/a>, show that we\u2019re making rapid progress towards our goal of political neutrality.<\/p>\n<p>As a broader point, no AI model, from any provider, is fully politically balanced in every reply. Models learn from their training data in ways that are not yet well understood, and developers are never fully in control of their outputs. Anyone can cherry-pick outputs from any model to make it appear slanted in a particular direction.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">Anthropic is committed to constructive engagement on matters of public policy. When we agree, we say so. When we don\u2019t, we propose an alternative for consideration. We do this because we are a public benefit corporation with a mission to ensure that AI benefits everyone and to maintain America&#8217;s lead in AI. Again, we believe we share those goals with the Trump administration, both sides of Congress, <a href=\"https:\/\/news.gallup.com\/poll\/694685\/americans-prioritize-safety-data-security.aspx\" rel=\"nofollow noopener\" target=\"_blank\">and the public<\/a>. We are going to keep being honest and straightforward, and will stand up for the policies we believe are right. 
The stakes of this technology are too great for us to do otherwise.<\/p>\n<p class=\"Body_reading-column__t7kGM paragraph-m post-text\">In his <a href=\"https:\/\/www.newsmax.com\/newsmax-tv\/vance-artificial-intelligence-ai\/2025\/10\/16\/id\/1230684\/\" rel=\"nofollow noopener\" target=\"_blank\">recent remarks<\/a>, the Vice President also said of AI, &#8220;Is it good or is it bad, or is it going to help us or going to hurt us? The answer is probably both, and we should be trying to maximize as much of the good and minimize as much of the bad.&#8221; That perfectly captures our view. We&#8217;re ready to work in good faith with anyone of any political stripe to make that vision a reality.<\/p>\n","protected":false},"excerpt":{"rendered":"A statement from Anthropic CEO Dario Amodei on Anthropic\u2019s commitment to advancing America&#8217;s leadership in building powerful and&hellip;\n","protected":false},"author":2,"featured_media":241209,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-241208","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/241208","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=241208"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/241208\/revisions"}],
"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/241209"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=241208"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=241208"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=241208"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}