WASHINGTON (AP) – Defense Secretary Pete Hegseth gave Anthropic’s CEO a Friday deadline to open up the company’s artificial intelligence technology to unrestricted military use or risk losing government contracts, according to people familiar with the talks.
Hegseth met with Anthropic CEO Dario Amodei on Tuesday. The company, which developed the chatbot Claude, is the last of the Pentagon's AI contractors that has not provided its technology to the new internal U.S. military network.
Beyond terminating the contract, Pentagon officials warned that they could designate Anthropic a supply chain risk and invoke the Defense Production Act, which would essentially give the military more authority to use the product even if the company does not approve of its use, the person said.
The person, who was not authorized to speak publicly about the meeting and spoke on condition of anonymity, said the tone of the meeting was friendly, but Amodei did not budge on two red lines Anthropic had drawn: fully autonomous military targeting operations and domestic surveillance of American citizens.
The Pentagon had no immediate comment.
Amodei has repeatedly raised ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and AI-assisted mass surveillance that could track dissent.
The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity.
The standoff highlights the debate over the role of AI in national security and concerns about how it might be used in high-stakes situations involving lethal force, classified information or government surveillance. It also comes alongside Hegseth's vow to eradicate what he called "woke culture" within the military.
“Powerful AI monitoring billions of conversations by millions of people could measure public sentiment, detect when disloyal groups are forming, and root them out before they spread,” Amodei wrote in an essay last month.
Anthropic is the only AI company approved for classified military networks
Last summer, the Department of Defense announced defense contracts with four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. Each contract is worth up to $200 million.
Anthropic is the first AI company approved for classified military networks and works with partners such as Palantir. The remaining three companies are currently operating only in unclassified environments.
By early this year, Hegseth had publicly singled out only two of them: xAI and Google.
In a January speech at Musk’s spaceflight company SpaceX in South Texas, the defense secretary said he was ignoring any AI models that would “not lead to war.”
Hegseth said his vision for military AI systems means they operate "without ideological constraints that limit legitimate military applications," adding that the Pentagon's AI "will never be woke."
In January, Hegseth announced that Musk's artificial intelligence chatbot Grok would join the Pentagon's GenAI.mil network. The announcement came days after Grok, which is part of Musk's social media network X, came under global scrutiny for producing highly sexualized deepfake images of people without their consent.
OpenAI announced in early February that it would also join the military’s secure AI platform, allowing military personnel to use a custom version of ChatGPT for unclassified tasks.
Anthropic bills itself as more safety-oriented
Anthropic has long pitched itself as the most responsible and safety-focused of the big AI companies, ever since its founders left OpenAI to launch the startup in 2021.
Owen Daniels, associate director of analysis and a fellow at Georgetown University's Center for Security and Emerging Technology, said the standoff with the Pentagon is testing those commitments.
“Anthropic’s peers, including Meta, Google, and xAI, are happy to comply with the department’s policy regarding the use of models for all lawful applications,” Daniels said. “The company therefore has limited bargaining power here and risks losing influence in driving the sector’s adoption of AI.”
Amid the AI boom that followed the release of ChatGPT, Anthropic worked closely with President Joe Biden's administration and voluntarily submitted its AI systems to third-party oversight to guard against national security risks.
Amodei has warned of the potentially devastating dangers of AI while rejecting the idea that such outcomes are "destiny." In a January essay, he said that "we are much closer to a real danger in 2026 than we were in 2023," but argued that those risks should be managed in a "realistic and pragmatic way."
Anthropic is at odds with the Trump administration
This isn't the first time Anthropic's push for stronger AI protections has put it at odds with the Trump administration. The company publicly criticized President Trump's proposal to ease export restrictions to allow some of AI chip maker Nvidia's computer chips to be sold in China, even as it remains a close Nvidia partner.
The Trump administration and Anthropic are also on opposing sides in lobbying for AI regulation in U.S. states.
In October, David Sacks, President Trump's top AI adviser, accused Anthropic of engaging in a "sophisticated regulatory capture strategy based on fear-mongering."
Sacks made his remarks on X in response to what Anthropic co-founder Jack Clark wrote about attempting to balance technological optimism with "appropriate fear" over the steady march toward more capable AI systems.
Anthropic hired a number of former Biden officials shortly after Trump returned to the White House, in part to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official during President Trump’s first term, to its board of directors.
The Pentagon-Anthropic standoff is reminiscent of the uproar several years ago when some tech workers opposed their companies' participation in Project Maven, the Pentagon's drone surveillance program. The Pentagon's reliance on drone surveillance has only grown since then, even after some employees quit over the project and Google itself withdrew.
Similarly, “the use of AI in the military is already a reality and is not going away,” Daniels said.
Amos Toh, senior adviser to the Liberty and National Security Program at New York University's Brennan Center for Justice, said the Pentagon's "astounding" deployment of AI shows the need for increased congressional oversight and regulation of AI, especially when it is used to monitor American citizens.
"The law has not kept up with the speed at which technology is evolving," Toh wrote in a post on Bluesky. "But that doesn't mean the Department of Defense has a blank check."
O’Brien reported from Providence, Rhode Island.
