Anthropic’s new AI coding tools have rattled markets this week, amid fears the start-up is upending traditional software development in ways that will disrupt sectors from publishing and advertising to law.
The San Francisco-based start-up has unveiled a set of tools that allow users to generate, deploy and automate software using generative AI, sharply reducing the technical expertise traditionally required to write and maintain code.
Technologists suggest Anthropic’s advances will undermine the economics of software development and squeeze specialist providers of AI tools in sectors such as legal services.
“It was very clear that we will never ever write code by hand again,” said Aditya Agarwal, the former chief technology officer at Dropbox. “Something I was very good at is now free and abundant.”

In 2025, Anthropic launched Claude Code, which uses large language models to generate lines of computer code. It quickly became the gold standard for AI-generated coding, and reached $1bn in revenue in only six months.
Claude Code helped to kick-start a “vibe coding” boom, allowing users to quickly create applications and software. But the tool still required technical skills and expert coders to review the output.
In January, Anthropic launched Cowork, which lets users take advantage of Claude Code to automate work tasks, such as summarising documents, using AI models without needing technical skills.
Last Friday, Anthropic went further, launching freely available “open source” plug-ins for Cowork. One of them included a tool for legal services, which lets users do tasks such as automating contract review. The company also rolled out tools tailored for sales, finance, marketing and customer support.
“The simplest way to think about Claude Code is that it is a chatbot that can do stuff,” said Guillaume Princen, Anthropic’s head of digital native businesses. “What Claude Code was for developers, Cowork is for knowledge workers,” he added.
Despite billions of dollars spent by Silicon Valley groups such as OpenAI and Google on rival products, Anthropic has gained a strong lead in AI-assisted coding by pioneering several techniques.
A popular way for AI labs to train their models is known as reinforcement learning from human feedback (RLHF), in which humans label whether a model’s output is desirable. The process is laborious and expensive: some expert data labellers are paid thousands of dollars an hour.
Anthropic has pioneered a complementary technique called reinforcement learning from AI feedback (RLAIF). This works by letting the AI model rate and criticise the answers it generates, based on guidelines set by humans. If a response does not match those guiding principles, the model revises it.
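In outline, the loop the company describes looks something like the sketch below. It is a minimal illustration only: the `generate` function stands in for any large language model call, and the principles are invented for the example rather than taken from Anthropic’s actual guidelines.

```python
# Minimal sketch of an RLAIF-style critique-and-revision loop.
# `generate` is a placeholder for an LLM completion call; it is not
# Anthropic's training code, and PRINCIPLES is an invented example.

PRINCIPLES = [
    "Do not help the user cause harm.",
    "Be honest about uncertainty.",
]

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM completion call here")

def critique_and_revise(question: str) -> str:
    answer = generate(question)
    for principle in PRINCIPLES:
        # The model critiques its own answer against a human-written principle...
        critique = generate(
            f"Question: {question}\nAnswer: {answer}\n"
            f"Does this answer violate the principle '{principle}'? Explain briefly."
        )
        # ...then rewrites the answer to address its own critique.
        answer = generate(
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
            "Rewrite the answer so it satisfies the principle."
        )
    return answer
```

Because the critic is itself a model, this kind of feedback can be produced at a scale no human labelling workforce could match.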
While the technique was initially designed to make its AI models safer, the company also inadvertently found a way to automate the improvement of its models at a much larger scale. Anthropic’s AI models, such as Claude 4.5 Opus, also top independent benchmarks that measure coding capabilities.
Anthropic said about 90 per cent of the code behind Claude Code was generated using the tool itself, with between 70 and 90 per cent of code across the company now written with the AI.
Unlike competitors such as OpenAI and Google, which are also trying to win over consumers, Anthropic has focused its efforts on enterprise uses such as software development. On Wednesday, the company pledged it would not run advertising, even as competitors such as OpenAI embrace ads as a way of generating new revenues.
The second breakthrough from Anthropic is an open-source tool called the model context protocol (MCP). It works as a bridge between AI models and external applications and databases, letting LLMs process information and take actions in real time in applications such as Slack.
MCP is the technology that allows Anthropic to create tools that organisations can easily adopt and plug into their computer systems and workflows. Because the protocol is open source, anyone can adapt and use it.
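In outline, an MCP server is a small program that advertises tools a model can discover and call. A minimal sketch using the protocol’s open-source Python SDK might look like the following; the server name and `count_clauses` tool are invented for illustration, and the SDK’s FastMCP helper is assumed from its public documentation.

```python
# Minimal sketch of an MCP server exposing a single tool.
# FastMCP comes from the open-source `mcp` Python SDK; the server
# name and the tool below are invented for illustration.
from mcp.server.fastmcp import FastMCP

server = FastMCP("contract-tools")

@server.tool()
def count_clauses(contract_text: str) -> int:
    """Count lines that begin with a digit, as a toy stand-in for
    a contract-review helper an LLM could call through MCP."""
    return sum(
        1 for line in contract_text.splitlines()
        if line.strip()[:1].isdigit()
    )

if __name__ == "__main__":
    # Serve the tool over stdio; an MCP client such as Claude can
    # then discover and invoke `count_clauses` during a conversation.
    server.run(transport="stdio")
```

Once a client such as Claude connects, the model can see the tool’s name, signature and description, and invoke it mid-task rather than merely describing what it would do.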
Anthropic’s new plug-ins include a series of tools for the legal profession. These offered a much simpler and cheaper way for firms to access cutting-edge technologies, said Nick West, chief strategy officer and AI lead at law firm Mishcon de Reya.
“If enterprises are already adopting Claude or are willing to do so . . . it could meaningfully compress pricing against, and reduce demand for, legal AI tools,” he said.
Rival makers of AI legal tools such as Harvey and Legora said they also used Claude, but neither planned to include Anthropic’s plug-in in its offering, saying its existing tools performed better.
Winston Weinberg, Harvey’s co-founder and chief executive, said it had “always been super vocal that we believe our long-term largest competition will be the model providers” rather than other legal tech.
Max Junestrand, Legora’s chief executive, said there remained a big distinction between Anthropic’s plug-ins and Legora’s more dedicated platform, which is trained to meet an individual law firm’s specific needs.
According to a Barclays survey of buy-side investors published on Monday, advertising agencies were among the sectors most exposed to AI developments, with WPP, Omnicom and Publicis ranked among investors’ top “AI losers”.
Analysts said this was because sales and marketing departments could develop their own tools using Claude, posing a greater risk to the advertising industry than to legal services.
Others warned that new challenges would emerge from the rise of AI-generated code.
AI models still frequently “hallucinate”, or fabricate things. Experts warned of a looming “comprehension debt”, in which junior coders become too reliant on AI tools and lose the ability to understand where things have gone wrong.
AI-generated errors could also have serious consequences in companies where accuracy is crucial, such as the highly regulated industries of banking and law.
Mishcon de Reya’s West said: “People will be testing to see whether the eye-catching demos turn out to be marketing fluff or evidence of high-quality repeatable outcomes when handling messy real-world contracts at scale.”

