The Department of Defense is pressuring AI developers to authorize the use of their technology by the U.S. military for all lawful purposes, but Anthropic is resisting, according to a new Axios report.
According to the report, the government has made similar demands of OpenAI, Google, and xAI. An unnamed Trump administration official told Axios that one of those companies has agreed, while the other two have shown some flexibility.
Anthropic, on the other hand, is reportedly the least willing to make concessions. In response, the Pentagon has reportedly threatened to suspend a $200 million contract with the AI company.
In January, the Wall Street Journal reported significant disagreements between Anthropic and Pentagon officials over how the Claude model could be used. Later, reports emerged about Claude's use in the U.S. operation against Venezuela during the arrest of Nicolas Maduro.
Negotiations between Anthropic and the government over the use of Claude are ongoing.
An Anthropic spokesperson told Axios that the company "did not discuss the use of Claude for specific operations with the Department of Defense." Instead, the spokesperson said, "We focused on a specific set of questions regarding policy applications, particularly on fully autonomous weapons and strict limits on mass surveillance."
At the same time, analysts note that the government continues to weigh further steps regarding Claude's use in military applications, and that Anthropic's response to the requirements for deploying the system in defense and national security settings remains under discussion.
