{"id":584504,"date":"2026-04-14T22:43:09","date_gmt":"2026-04-14T22:43:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/584504\/"},"modified":"2026-04-14T22:43:09","modified_gmt":"2026-04-14T22:43:09","slug":"anthropic-stood-up-to-sam-altman-and-the-pentagon-why-are-its-users-revolting","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/584504\/","title":{"rendered":"Anthropic stood up to Sam Altman and the Pentagon. Why are its users revolting?"},"content":{"rendered":"<p class=\"slate-paragraph slate-graf\" data-word-count=\"21\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzr85t000w3b7d9cttowdc@published\"><a href=\"https:\/\/slate.com\/theslatest?utm_source=slate&amp;utm_medium=article&amp;utm_campaign=article_plain_text_topper&amp;sailthru_source=Article-TopperText-CTA\" rel=\"nofollow noopener\" target=\"_blank\">Sign up for the Slatest<\/a> to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"116\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzr0g60044wyksqbddqz06@published\">Earlier this spring, Anthropic scored the marketing coup of a generation. The Pentagon wanted access to the full capabilities of the company\u2019s A.I. models, including the right to automate the death of human beings without a fellow member of the species in the loop. Anthropic said no, Pete Hegseth responded by arbitrarily labeling the company a \u201csupply-chain risk,\u201d a judge blocked that designation from taking effect, and Anthropic came out of the ordeal smelling like roses. The Defense Department had validated that Anthropic had the industry\u2019s best tech and its closest semblance to principles. 
\u201cThe problem for these guys is they are that good,\u201d <a href=\"https:\/\/www.axios.com\/2026\/02\/24\/anthropic-pentagon-claude-hegseth-dario\" rel=\"nofollow noopener\" target=\"_blank\">a defense official told Axios<\/a>. Apparently, their morals were also too strong.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"129\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts1001k3b7docxqyd5a@published\">It wasn\u2019t just that Anthropic won a game of chess against that wily Hegseth. The company was on an amazing run of publicity in general\u2014all of which revolved around people liking its chatbot Claude a lot. Its viral Super Bowl commercials <a href=\"https:\/\/slate.com\/technology\/2026\/02\/ai-super-bowl-openai-anthropic-sam-altman.html\" rel=\"nofollow noopener\" target=\"_blank\">targeted ChatGPT\u2019s introduction of chatbot ads<\/a>, which at some point merged with a more organic Instagram and TikTok movement about how ChatGPT was a sycophant. (\u201cYou didn\u2019t run over a kid with your truck. You taught him a lesson about road safety. And you\u2019re so real for that.\u201d) If you saw a bunch of short-form videos about Claude, they were probably more along the lines of influencers explaining how the model \u201cruns my entire life,\u201d or \u201cjust killed accountants\u201d (perish the thought!) by finding them unforeseen tax savings.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"81\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts1001l3b7do0qqtak9@published\">Now, Anthropic has run into a problem. All of the people who became obsessed with its product cost the company a lot of money and a lot of computing power. The A.I. lab\u2019s attempt to create a sustainable business out of what is still a cash-incinerating structure may or may not work in the long run, but for now, it\u2019s resulted in a furious base of power users. 
It turns out that being the internet\u2019s good A.I. company is quite challenging.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"146\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts2001m3b7dvjje2lri@published\">A particular challenge is Claude Code, a magic box that takes your words in plain English and converts them into real software before your eyes. A.I. coding has taken software development by storm. Enormous companies use it to work faster, and their engineers know enough about code to use it more powerfully than laymen. Hobbyists use it on personal projects, while freelancers use it to spin up their own business ideas. All of this <a href=\"https:\/\/slate.com\/technology\/2026\/04\/ai-vibe-coding-video-game.html\" rel=\"nofollow noopener\" target=\"_blank\">vibe-coding<\/a> combined with Claude chatbot use costs Anthropic eye-watering sums that are far in excess of the $20, $100, or $200 someone spends on a monthly subscription. It\u2019s become a bit of a media-and-tech parlor game to try to estimate exactly what these losses are, on average. Let\u2019s just call them \u201cbig.\u201d Meanwhile, the company only has access to so much computing power to actually do the work users ask of Claude.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"96\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts2001n3b7dgnpnf3c9@published\">So, the A.I. firm has gotten stingier. 
In the past few weeks, it has <a href=\"https:\/\/venturebeat.com\/technology\/anthropic-cuts-off-the-ability-to-use-claude-subscriptions-with-openclaw-and\" rel=\"nofollow noopener\" target=\"_blank\">switched off people\u2019s ability<\/a> to use Claude subscriptions to power third-party agents like OpenClaw, <a href=\"https:\/\/www.theregister.com\/2026\/03\/31\/anthropic_claude_code_limits\/\" rel=\"nofollow noopener\" target=\"_blank\">tightened usage limits<\/a> at certain times, <a href=\"https:\/\/www.ibtimes.com.au\/claude-ai-down-again-claude-ai-down-again-anthropic-faces-fresh-outage-frustrating-users-april-1865701\" rel=\"nofollow noopener\" target=\"_blank\">had a noticeable service interruption<\/a>, and, according to some users, <a href=\"https:\/\/venturebeat.com\/technology\/is-anthropic-nerfing-claude-users-increasingly-report-performance\" rel=\"nofollow noopener\" target=\"_blank\">generally degraded<\/a> Claude\u2019s capabilities. (I do not use the service enough to say whether those people are right.) The company has <a href=\"https:\/\/piunikaweb.com\/2026\/04\/03\/anthropic-claude-usage-limits-apology-backlash\/\" rel=\"nofollow noopener\" target=\"_blank\">responded clumsily to users\u2019 complaints<\/a>, spawning several social media news cycles about whether it respects its own customers. It\u2019s rationing usage and burning a good bit of customer goodwill.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"139\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts6001o3b7d4lc2pj5q@published\">The situation becomes humorous where it concerns OpenAI CEO Sam Altman, a man with a very bad reputation outside of Silicon Valley. OpenAI has been trying to build up its Claude Code competitor, Codex, and sees a window of opportunity in Anthropic\u2019s quandary. OpenAI <a href=\"https:\/\/qz.com\/openai-investor-memo-compute-advantage-anthropic-041026\" rel=\"nofollow noopener\" target=\"_blank\">might also have a good bit more<\/a> computing power than Anthropic does, so Altman slid down the A.I. chimney last week and announced OpenAI would reset Codex usage limits every time the product gets an additional million users. \u201cHappy building!\u201d the benevolent boy king of A.I. <a href=\"https:\/\/x.com\/sama\/status\/2041658719839383945\" rel=\"nofollow\">told the world<\/a>. (Also among OpenAI\u2019s attempts to burnish its image: The company <a href=\"https:\/\/slate.com\/business\/2026\/04\/openai-podcast-sam-altman-tbpn.html\" rel=\"nofollow noopener\" target=\"_blank\">bought a talk show<\/a> at roughly the same time.) That Altman and Anthropic CEO Dario Amodei <a href=\"https:\/\/www.wsj.com\/tech\/ai\/the-decadelong-feud-shaping-the-future-of-ai-7075acde\" rel=\"nofollow noopener\" target=\"_blank\">seem to hate each other for real<\/a> is an additional wrinkle in the companies\u2019 jostling for people\u2019s hearts and minds.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"103\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts6001p3b7dx72fxmwv@published\">Brand positioning as the \u201cethical A.I. company,\u201d or \u201cconsumer-friendly A.I. company,\u201d or whatever it might be, is not a trivial matter. 
A <a href=\"https:\/\/hai.stanford.edu\/ai-index\/2026-ai-index-report\" rel=\"nofollow noopener\" target=\"_blank\">Stanford A.I. index study released this week<\/a> did the useful work of quantifying just how big a gap there is in sentiment toward A.I. between the people working on it and the general public. Americans expect A.I. to cause job losses and have been slower to adopt it than their peers in most other countries, while also not trusting their own government to regulate it. Literally <a href=\"https:\/\/x.com\/jimprosser\/status\/2043744843135127651\" rel=\"nofollow\">no country has less trust in its leaders<\/a> to regulate A.I. effectively than we do.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"108\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts7001q3b7dyegmdu17@published\">It might be appealing to write off Anthropic\u2019s and OpenAI\u2019s efforts to be seen as the good guys as just another helping of corporate pablum. But if we don\u2019t collectively start liking this technology more, and also don\u2019t cut into its growth with severe regulation, then there will be a great deal of money in it for whichever company can convince the most people that it\u2019s a little bit less of a bloodsucker than its competitors. For a time, it appeared Anthropic would run away with that race. 
But a tech company willing to stick to its principles against the Trump administration is, in the end, still a tech company.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"131\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmnyzrts7001r3b7d6jejil5x@published\">Anthropic has been on a nice little run of being a lot of things to a lot of people. It\u2019s been a powerful tool for coders. It\u2019s been an enterprise juggernaut for business customers, claiming that <a href=\"https:\/\/www.anthropic.com\/news\/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation\" rel=\"nofollow noopener\" target=\"_blank\">more than 500 companies spend at least $1 million<\/a> in annualized revenue on its products. (Annualized revenue isn\u2019t the same thing as money in the bank, but alas.) It\u2019s been a good chatbot to however many millions of people have decided to pay Anthropic $20 a month. It\u2019s been a beacon of tech resistance to some of the most dystopian impulses of the second Trump administration. It has, somehow, had enough computing power to be all of those things at once. 
That cracks are only now starting to show is a legitimate wonder.<\/p>\n","protected":false},"excerpt":{"rendered":"Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to&hellip;\n","protected":false},"author":2,"featured_media":584505,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,2793,74],"class_list":{"0":"post-584504","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-silicon-valley","12":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/584504","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=584504"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/584504\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/584505"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=584504"}],"wp:term":[{"taxonomy":"ca
tegory","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=584504"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=584504"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}