Lauri Sulonen, head of financial planning at Finnish mobile gaming company Supercell, used to be sceptical about how much artificial intelligence would transform his job. “I thought there is so much context you need to know and people you need to talk to to get stuff done . . . that is hard for AI.”

But when he assigned an AI-powered “analyst agent” to produce the monthly performance report that typically took his team three hours, it was done in five minutes. Sulonen says the AI made no mistakes, the quality was good and it provided references to check the numbers. “I was pretty bearish before starting this . . . I’ve changed my assumptions.”

His AI partner for this task was Pigment, a specialist business planning platform based in France that carried out the “basic repetitive work that’s the least interesting for us, but most prone to human error”.

While some companies are still experimenting, a growing number are finding that routine analytical work — from forecasting to financial modelling — as well as research and drafting content, can now be done almost instantly by software agents. Many have already introduced specific tools that have transformed the work of professionals in their industry — Harvey in legal services, Writer for corporate communications, Synthesia for training content and Intercom’s Fin for customer support, for example. Some companies are building their own specialised tools in-house.

But an announcement last week from AI company Anthropic fired a warning shot over how the future might look for AI in the workplace. The company unveiled a range of new tools that can be customised to specific industries such as law, finance, sales, marketing and customer support, and are able to carry out white-collar tasks with little human oversight.

The launch sparked worries among investors who had been betting on industry-specific AI developers, whose businesses suddenly looked more vulnerable. A similar trend was evident in wealth management, where the share prices of several companies fell last week over concerns about potential disruption from a new AI-led investment tool.

The Anthropic announcement rippled through corporate offices, prompting employees who are already using customised AI tools for more menial tasks to investigate the alternatives, and assess whether they increase the risk that their job could be replaced.

Until now, companies such as Anthropic and OpenAI have built large AI models that essentially act as the foundations for developers and companies to then produce more specialist tools for lawyers, bankers, consultants and other professionals. Goldman Sachs recently announced it was working with Anthropic on an AI agent to automate roles at the bank. Anthropic says Uber, Netflix, Salesforce and Allianz also use its models in a similar way.

Anthropic’s new tools, released under its Claude Cowork platform, offer businesses a single agentic platform, which some think could eliminate the need for multiple specialist subscriptions or expensive in-house development, and potentially broaden productivity savings.

They use customisable “plug-ins” for company-specific AI processes, such as a tool to automate legal contract reviews, and “subagents” for specific tasks such as data visualisation. The new products are a development of the company’s Claude Code, which uses large language models to generate lines of computer code. “It’s the same powerful agent, but much more accessible,” Guillaume Princen, Anthropic’s head of digital native businesses, told the FT following the launch.

Specialist AI companies have hit back, saying their systems have better checks and balances, audit trails and other safety layers that a generic agent still needs to prove it can match. Bespoke developers say their advantage lies in their ability to turn what was a cumbersome model into something more usable and reliable that can be more easily integrated into existing workflows.

Companies such as advertising group Publicis and legal services provider Relx have been among those trumpeting their rapid adoption of technology they have promised to marry with internal data and expertise.

“It’s tempting to assume that increasingly capable general-purpose AI will simply replace sector-specific legal tools,” says Harry Borovick, general counsel and AI governance officer at Luminance, a UK-based AI document review and analysis company. He notes his industry requires systems that are able to operate across complex cross-border, privacy, governance and audit scenarios. “This means consistency and trust are key and that . . . domain-specific tools only increase in value.”

A number of lawyers and legal tech providers have claimed the Anthropic tool is less effective than other available products. Harvey and Legora — the two leading legal AI companies — use models from Anthropic and OpenAI to power their systems but have developed their own tools to run on them.

In a LinkedIn post following Anthropic’s announcement, Legora chief executive Max Junestrand made it clear he did not see the new plug-ins as a threat. “There is an important difference between a plug-in and operating a collaborative . . . production-grade platform used by hundreds of the world’s leading legal teams,” he wrote.

Legal workers who have tested the Anthropic product shared similar views on social media. One criticised the plug-in for using Wikipedia as a source.

Analysts at JPMorgan say Claude Cowork does not change the competitive environment for Relx’s legal service. “Claude Cowork is just catching up with the products already offered by Harvey and Relx, and given the lack of a complete legal library, it seems unlikely it will ever be able to match the full set of agentic solutions offered by Relx.”

LexisNexis’s AI service for legal professionals launched in January, offering a library of hundreds of pre-built, configurable workflows for disputes, case strategy and other legal processes that can be deployed out of the box or tailored using firm-specific guidance.

Analysts say the new tools could pose a higher risk to the advertising industry.

This year’s Super Bowl showcased the extent to which AI is taking over advertising, with Svedka Vodka using it to help create an ad.

Tools that can turn simple text prompts into ads in minutes are already available to the clients of most large advertising groups, despite their heavily deflationary effect on a once-premium service that is still often billed by the hour. AI assistants can help with targeting, media planning and campaign development.

One advertising executive says the “generic” models being offered by Anthropic could pose more of a threat to specific industry tools, but believes the extensive data and client knowledge within the large agencies gives them an edge in developing more sophisticated advertising campaigns. Large agencies such as WPP already use models from Google, OpenAI and Anthropic to provide the intelligence behind their in-house tools.

A bigger risk is that clients will increasingly do the work themselves: marketing teams can develop their own tools using Claude and use them to produce campaigns in-house.

Eléonore Crespo, co-chief executive of Pigment, the platform used by Supercell, says specialist AI providers “succeed because they . . . understand unique data structures, integrate into specific workflows, and provide the governance and auditability that highly regulated sectors require”.

While “a generalist model is a compelling, low-friction entry point for experimentation”, she adds, “in practice, we often see that as a stepping stone rather than an endpoint. The reality is that generalists are for play, but specialists are for work.”