Axios reports that the Pentagon has threatened to cut ties with artificial intelligence company Anthropic over the company’s restrictions on the use of its AI models by the U.S. military, citing Trump administration officials.
The report said the Pentagon is “disgusted” by months of negotiations in which Anthropic has resisted the U.S. government’s push for full military use of the company’s tools for “all lawful purposes.” That would include use in “the most sensitive areas of weapons development, intelligence gathering, and battlefield operations,” the report said.
Reuters said it could not independently confirm the report.
The Wall Street Journal reports that Anthropic’s contract with the Department of Defense is worth about $200 million.
Why would the Department of Defense want complete control over the use and application of AI models?
Two sticking points for Anthropic are fully autonomous weapons and mass surveillance of American citizens, the report added. Notably, the Department of Defense has contracts with Anthropic, Alphabet (Google), OpenAI, and Elon Musk’s xAI.
The source told Axios that there is “a significant gray area in what does and does not apply” to the disputed categories, and that the Pentagon does not intend to negotiate with Anthropic on a case-by-case basis or have its AI model, Claude, unexpectedly block some processes.
As for whether the department could drop the company from its roster of AI providers, the official said, “Everything is on the table…but if we think that’s the right answer, we’re going to need an orderly replacement.”
In a statement to Axios, Anthropic said it “remains committed to using frontier AI in support of U.S. national security.” The company’s usage guidelines state that Claude may not be used to promote violence, develop weapons, or conduct surveillance.
Pentagon versus Anthropic: Did Claude’s use during Maduro’s capture raise concerns?
Reuters reported last month that Anthropic and the Pentagon clashed over safeguards that prevent the government from deploying AI models to autonomously target weapons and conduct surveillance inside the United States.
Notably, the WSJ on February 14 reported that Anthropic’s Claude AI was used by the United States during the operation to capture former Venezuelan President Nicolas Maduro. The application was reportedly offered through a partnership between Anthropic and Palantir, whose tools are widely used by the U.S. Department of Defense and federal law enforcement agencies.
An Anthropic spokesperson told the Journal: “We cannot comment on whether Claude or any other AI model has been used in a specific operation, sensitive or otherwise. Anyone using Claude, whether in the private sector or across the government, must follow usage policies that govern how Claude is deployed. We work closely with our partners to ensure compliance.”
Could the US military replace Claude with another AI player?
Other models do not yet have the same configuration for use in specialized government applications, making a quick replacement difficult, Axios reports. “Other model companies are right behind Claude,” the official said, noting that Claude was the first model to make it onto the Pentagon’s classified network.
Additionally, ChatGPT (OpenAI), Gemini (Google), and Grok (xAI) are all used in unclassified settings. These companies have agreed to relax their normal safeguards to work with the Pentagon, and negotiations are underway to move them into sensitive areas, the report added. As for whether they had agreed to the “all lawful purposes” condition, the official said one had agreed and two had “demonstrated more flexibility than Anthropic.”
In a statement to Axios, an Anthropic spokesperson reiterated the company’s commitment to national security: “As such, we were the first frontier AI company to deploy our models on classified networks, and the first to offer customized models for national security customers.”
Important points
- The Department of Defense is reportedly unhappy with Anthropic’s restrictions on using its AI models in military applications.
- Anthropic’s restrictions on military applications, particularly autonomous weapons, conflict with the Department of Defense’s requirements.
- Anthropic’s $200 million contract could be affected over the company’s opposition to autonomous weapons and domestic surveillance.
