Anthropic will stop selling artificial intelligence services to groups majority owned by Chinese entities, in the first such policy shift by an American AI company.
The San Francisco-based developer of Claude AI is trying to limit the ability of Beijing to use its technology to benefit China’s military and intelligence services, according to an Anthropic executive who briefed the Financial Times.
The policy, which takes effect immediately, will potentially apply to Chinese companies ranging from ByteDance and Tencent to Alibaba.
“We are taking action to close a loophole that allows Chinese companies to access frontier AI,” said the executive, who added that the policy would also apply to US adversaries including Russia, Iran and North Korea.
The executive said the policy was designed “to align with our broader commitment that transformational AI capabilities advance democratic interests and US leadership in AI”.
The shift reflects rising concerns in the US about Chinese groups setting up subsidiaries abroad in an effort to conceal their attempts to obtain American technology.
Direct customers, as well as groups that access Anthropic’s services via cloud providers, will be affected. The executive said the impact on Anthropic’s global revenues would be in the “low hundreds of millions of dollars”.
He said Anthropic understood it would lose some business to rivals, but felt the move was necessary to highlight what the company considers a “significant problem”.
It comes as concerns rise in the US about China using AI for military purposes ranging from hypersonic weapons to nuclear weapons modelling.
Chinese start-up DeepSeek sent shockwaves through the AI industry earlier this year when it released its open-source R1 model, which is considered comparable to leading US models. OpenAI later said it had evidence that DeepSeek had accessed its models inappropriately to train R1. DeepSeek has not commented on the claims.
The Biden administration imposed sweeping export controls in an effort to make it harder for China to obtain American AI. The Trump administration has so far implemented almost no new controls as President Donald Trump tries to secure a meeting with China’s President Xi Jinping.
One person familiar with the situation said the policy was partly aimed at the growing number of Chinese subsidiaries in Singapore that companies on the mainland are using to access US technology with less scrutiny.
It reflects the fact that groups in China must share data with the government when asked, posing a national security risk to the US. It also points to concerns about China appropriating American AI technology in ways that give it a commercial advantage over AI groups in the US.
“This move could potentially impact companies like ByteDance, Alibaba and Tencent,” said one person familiar with the situation.
Anthropic was founded in 2021 by former OpenAI employees who wanted to prioritise AI safety. The company on Tuesday announced that it had raised $13bn in fresh funding, valuing it at $170bn.
Earlier this year, Anthropic chief executive Dario Amodei advocated for strengthening export controls on China. Anthropic’s main rival, OpenAI, has also offered support for controls to “protect” the US’s lead in AI.
Access to US chatbots — such as Claude, OpenAI’s ChatGPT, Google’s Gemini and Meta’s AI — is banned in China. But users can access the technology by using virtual private networks, which is against the platforms’ terms of service.