Earlier this spring, Anthropic scored the marketing coup of a generation. The Pentagon wanted access to the full capabilities of the company’s A.I. models, including the right to automate the death of human beings without a fellow member of the species in the loop. Anthropic said no, Pete Hegseth responded by arbitrarily labeling the company a “supply-chain risk,” a judge blocked that designation from taking effect, and Anthropic came out of the ordeal smelling like roses. The Defense Department had validated that Anthropic had the industry’s best tech and its closest semblance to principles. “The problem for these guys is they are that good,” a defense official told Axios. Apparently, their morals were also too strong.

It wasn’t just that Anthropic won a game of chess against that wily Hegseth. The company was on an amazing run of publicity in general—all of which revolved around people liking its chatbot Claude a lot. Its viral Super Bowl commercials mocked ChatGPT’s introduction of chatbot ads, a campaign that at some point merged with a more organic Instagram and TikTok movement about how ChatGPT was a sycophant. (“You didn’t run over a kid with your truck. You taught him a lesson about road safety. And you’re so real for that.”) If you saw a bunch of short-form videos about Claude, they were probably more along the lines of influencers explaining how the model “runs my entire life,” or “just killed accountants” (perish the thought!) by finding them unforeseen tax savings.

Now, Anthropic has run into a problem. All of the people who became obsessed with its product cost the company a lot of money and a lot of computing power. The A.I. lab’s attempt to create a sustainable business out of what is still a cash-incinerating structure may or may not work in the long run, but for now, it’s resulted in a furious base of power users. It turns out that being the internet’s good A.I. company is quite challenging.

A particular challenge is Claude Code, a magic box that takes your words in plain English and converts them into real software before your eyes. A.I. coding has taken software development by storm. Enormous companies use it to work faster, and their engineers know enough about code to use it more powerfully than laymen. Hobbyists use it on personal projects, while freelancers use it to spin up their own business ideas. All of this vibe-coding combined with Claude chatbot use costs Anthropic eye-watering sums that are far in excess of the $20, $100, or $200 someone spends on a monthly subscription. It’s become a bit of a media-and-tech parlor game to try to estimate exactly what these losses are, on average. Let’s just call them “big.” Meanwhile, the company only has access to so much computing power to actually do the work users ask of Claude.

So, the A.I. firm has gotten stingier. In the past few weeks, it has switched off people’s ability to use Claude subscriptions to power third-party agents like OpenClaw, tightened usage limits at certain times, had a noticeable service interruption, and, according to some users, generally degraded Claude’s capabilities. (I do not use the service enough to say whether those people are right.) The company has responded clumsily to users’ complaints, spawning several social media news cycles about whether it respects its own customers. In squeezing usage, the company is also burning a good bit of customer goodwill.

The situation becomes humorous where it concerns OpenAI CEO Sam Altman, a man with a very bad reputation outside of Silicon Valley. OpenAI has been trying to build up its Claude Code competitor, Codex, and sees a window of opportunity in Anthropic’s quandary. OpenAI might also have a good bit more computing power than Anthropic does, so Altman slid down the A.I. chimney last week and announced OpenAI would reset Codex usage limits every time the product gets an additional million users. “Happy building!” the benevolent boy king of A.I. told the world. (Also in OpenAI attempts to burnish its image: The company bought a talk show at roughly the same time.) That Altman and Anthropic CEO Dario Amodei seem to hate each other for real is an additional wrinkle in the companies’ jostling for people’s hearts and minds.

Brand positioning as the “ethical A.I. company,” or “consumer-friendly A.I. company,” or whatever it might be, is not a trivial matter. A Stanford A.I. index study released this week did the useful work of quantifying just how big a gap there is in sentiment toward A.I. between the people working on it and the general public. Americans expect A.I. to cause job losses and have been slower to adopt it than their peers in most other countries, while also not trusting their own government to regulate it. Literally no country has less trust in its leaders to regulate A.I. effectively than we do.

It might be appealing to write off Anthropic’s and OpenAI’s efforts to be seen as the good guys as just another helping of corporate pablum. But if we don’t collectively start liking this technology more, and also don’t cut into its growth with severe regulation, then there will be a great deal of money in it for whichever company can convince the most people that it’s a little bit less of a bloodsucker than its competitors. For a time, it appeared Anthropic would run away with that race. But a tech company willing to stick to its principles against the Trump administration is, in fact, a tech company.

Anthropic has been on a nice little run of being a lot of things to a lot of people. It’s been a powerful tool for coders. It’s been an enterprise juggernaut for business customers, claiming that more than 500 companies each account for at least $1 million in annualized revenue on its products. (Annualized revenue isn’t the same thing as money in the bank, but alas.) It’s been a good chatbot to however many millions of people have decided to pay Anthropic $20 a month. It’s been a beacon of tech resistance to some of the most dystopian impulses of the second Trump administration. It has, somehow, had enough computing power to be all of those things at once. That cracks are only now starting to show is a legitimate wonder.