Anthropic’s $200 million contract with the Department of Defense has been in limbo since Anthropic reportedly raised concerns about the Pentagon’s use of its Claude AI model during the attack on Nicolas Maduro in January.
“The Department of War’s relationship with Anthropic is being reviewed,” Pentagon Chief Spokesman Sean Parnell said in a statement to Fortune. “Our nation requires our partners to be willing to help our warfighters win any battle. Ultimately, this is about the safety of our troops and the American people.”
Tensions have escalated in recent weeks after a senior Anthropic official reportedly questioned Palantir executives about how Claude was used in the attack, according to The Hill. Palantir executives interpreted the communication as disapproval of the model’s use in the attack and forwarded details of the exchange to the Pentagon. (President Trump said the military used “dismantler” weapons that rendered enemy equipment “inoperable” during the raid.)
“Anthropic does not consult with the Department of War regarding the use of Claude for specific operations,” an Anthropic spokesperson said in a statement to Fortune. “Additionally, we have not discussed this matter or raised any concerns with our industry partners outside of day-to-day discussions about strictly technical issues.”
At the heart of this debate are contractual guardrails governing how AI models can be used in defense operations. Anthropic CEO Dario Amodei has consistently advocated for strict limits on the use of AI and for its regulation, even admitting that balancing safety and profit has become difficult. In recent months, the company and the Pentagon have been engaged in contentious negotiations over how Claude can be used in military operations.
Under its Pentagon contract, Anthropic does not allow the department to use its AI models for mass surveillance of U.S. citizens or in fully autonomous weapons. The company also bans the use of its technology in “lethal” or “kinetic” military applications. Direct involvement in active gunfire during the Maduro attack would likely violate these conditions.
Among AI companies contracting with the government, including OpenAI, Google, and xAI, Anthropic occupies an advantageous position: Claude is the only large language model approved for use on the Department of Defense’s classified network.
Anthropic’s statement to Fortune emphasized this position. “Claude is used for a variety of intelligence-related use cases across government, including the DoW, in accordance with our usage policy.”
The company is “committed to leveraging frontier AI in support of U.S. national security,” the statement said. “We continue this work and are having productive conversations with the DoW in good faith about how to resolve these complex issues.”
Palantir, OpenAI, Google, and xAI did not respond to requests for comment.
AI goes to war
The Pentagon is accelerating its efforts to integrate AI into its operations, but xAI is the only company that allows the Pentagon to use its models for “all lawful purposes,” while other companies maintain usage restrictions.
Amodei has been sounding the alarm about safety for months, positioning Anthropic as a safety-first alternative to OpenAI and Google in the absence of government regulation. “I’m very displeased that some companies have made these decisions,” he said in November. Although Anthropic was rumored to be planning to loosen its restrictions, the company now faces the possibility of being excluded from the defense industry entirely.
A senior Department of Defense official told Axios that Defense Secretary Pete Hegseth is “closer” to removing Anthropic from the military’s supply chain, which would force anyone wishing to do business with the military to cut ties with the company.
“It will be very painful to untangle them. We will make sure that they pay the price for forcing our hand in this way,” a senior official told the outlet.
Being deemed a munitions supply-chain risk is a special designation typically reserved for foreign adversaries. The closest precedent is the government’s 2019 ban on Huawei, citing national security concerns. In Anthropic’s case, a source told Axios that defense officials had been itching to pick a fight with the San Francisco-based company for some time.
The Pentagon’s comments are the latest in a simmering public debate. The government argues that the ethical limits companies impose on their models are unnecessarily restrictive, rendering the technology unusable in the many gray areas of military work. As the Pentagon continues to negotiate with AI contractors to expand its use of the technology, the public battle has become a proxy skirmish over who gets to decide how AI is used.
