Clash between Anthropic and federal government

WASHINGTON, D.C. – Anthropic Public Benefit Corporation, one of the world’s leading artificial intelligence companies, has been designated a “supply chain risk to national security” by Secretary of War Pete Hegseth.

The backstory:

The conflict between the Department of Defense and Anthropic centers on the military’s demand for unrestricted access to Anthropic’s artificial intelligence system, nicknamed Claude. 

Anthropic, based in San Francisco, was formed as a Public Benefit Corporation, a structure that commits it to the responsible development of AI.

Dig deeper:

Anthropic is refusing to allow the government to use its system to create and operate autonomous weapons, or to conduct mass surveillance of Americans.

Ann Skeet, Senior Director at Santa Clara University's Markkula Center for Applied Ethics, said Anthropic claims Claude has a built-in directive to make ethical decisions. 

“I think what [they’re] saying is that the models are not capable enough right now to support those uses safely, particularly the autonomous weapons. I think that’s what the company is trying to say: can we just slow down and make sure we’re doing the right thing here?” Skeet said.

However, that hesitation has prompted Hegseth to declare Anthropic a supply chain risk to the government, a status usually reserved for companies tied to foreign adversaries.

The designation bars Anthropic from the entire U.S. defense system and, likely, the broader federal government.

What they’re saying:

Long-time tech expert Larry Magid said Anthropic is doing what it should.

“If you have a product capable, even if it is unlikely, but capable of doing enormous harm, you don’t want to put that product into a situation where it can do that harm,” Magid said.

AI is far from infallible, and Magid said extra caution must be taken when applying the technology to weapons. 

“Anybody who knows anything about AI knows that once you essentially put a gun in its hands, you run the risk that it might shoot that gun and that there is a possibility that it could make a mistake,” said Magid.

Additionally, mass surveillance – which is useful in military conflict zones – could be highly intrusive in American society. 

“I don’t want technology used by a federal government to have mass surveillance on American citizens,” said Representative Ro Khanna (D-Santa Clara).

Though Anthropic has some months to disengage from the U.S. government, intermediaries are trying to settle the differences between the two parties.

The Source: Original reporting by Tom Vacar of KTVU
