President Donald Trump announced Friday that he will order all federal agencies to stop using Anthropic’s technology, following an unusually public spat between the AI company and the Pentagon over the use of artificial intelligence in surveillance and autonomous weapons.
President Trump’s comments came just over an hour before the Pentagon’s deadline for Anthropic to allow unrestricted military use of its artificial intelligence technology, and nearly 24 hours after CEO Dario Amodei said the company “cannot in good conscience comply” with the Pentagon’s request.
The president said most government agencies must immediately stop using Anthropic technology, but gave the Pentagon a six-month period to phase out technology already incorporated into military platforms.
“I don’t want it. I don’t want it. I won’t do business with you again!” Trump wrote.
Anthropic did not immediately respond to a request for comment on Trump’s remarks.
At issue in the defense contract were conflicts over the role of AI in national security and concerns about how increasingly sophisticated machines would be used in high-stakes situations such as lethal force, classified intelligence, and government surveillance.
The move could benefit Elon Musk’s rival chatbot Grok, which the Pentagon plans to give access to classified military networks, and could also serve as a warning to two other competitors, Google and OpenAI, which also have contracts to supply AI tools to the military.
If Amodei does not relent, military officials said they would not only cancel the contract with Anthropic but also “consider it a supply chain risk,” a designation that could derail the company’s partnerships with other firms and is typically reserved for foreign adversaries. President Trump made no such designation in Friday’s announcement, but said Anthropic could face “significant civil and criminal consequences” if it does not cooperate during the phase-out period.
And if Mr. Amodei were to relent, he could lose credibility in the burgeoning AI industry, especially among the top talent drawn to the company by its promise to build AI responsibly, on the view that without safeguards the technology could pose catastrophic risks.
Anthropic said it asked the Pentagon for limited assurances that Claude would not be used for mass surveillance of Americans or for fully autonomous weapons, uses that are barred by the company’s usage policy for all users of its technology.

But months of private discussions had turned into a public dispute. In its statement Thursday, the company said the new contract language was “framed as a compromise, combined with legal language that allows for free disregard of these safeguards.”
It came after top Pentagon spokesperson Sean Parnell posted on social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal), nor does it want to use AI to develop autonomous weapons that operate without human involvement.”
He emphasized that the Pentagon wants to “use Anthropic’s models for any lawful purpose,” but he and other officials did not provide details on how they hope to use the technology.
Amid the rapid global advancement and deployment of artificial intelligence, the federal government has invested millions of dollars to bring together the expertise of three existing laboratories into a single effort that can monitor potential future dangers.
Parnell also argued that unrestricted use of the technology was necessary to keep the company from “jeopardizing critical military operations.”
“We do not allow any company to dictate the terms of how we make operational decisions,” Parnell wrote.
Reaction from the tech industry
Emil Michael, the Under Secretary of Defense for Research and Engineering, later slammed Amodei, claiming the CEO “has a God complex” and “wants nothing more than to seek personal control of the U.S. military and is willing to jeopardize the security of our nation.”
That message didn’t resonate in many parts of Silicon Valley, where a growing number of technology executives, including some from Anthropic’s biggest rivals, OpenAI and Google, expressed support for Amodei’s position in an open letter late Thursday.
OpenAI and Google, along with Elon Musk’s xAI, have deals to supply their AI models to the military.

Musk on Friday endorsed the Trump administration’s move on his social media platform X, saying “humanity hates Western civilization” after Michael called attention to an earlier version of Claude’s guiding principles that encouraged “considering non-Western perspectives.”
In a surprise move by one of Amodei’s most powerful rivals, OpenAI CEO Sam Altman on Friday sided with Anthropic in an interview with CNBC, questioning the Pentagon’s “threatening” move and suggesting that OpenAI and much of the AI field share the same red line.
Mr. Amodei worked at OpenAI until leaving in 2021 to found Anthropic with other OpenAI leaders.
“Despite my differences with Anthropic, I pretty much trust them as a company and I think they really care about safety,” Altman told CNBC. “I’m glad they’re supporting our fighters. We don’t know what’s going to happen.”

