WASHINGTON (AP) – Defense Secretary Pete Hegseth gave Anthropic’s CEO a Friday deadline to open up the company’s artificial intelligence technology for unrestricted military use or risk losing government contracts, according to people familiar with Tuesday’s talks.
Anthropic, maker of the chatbot Claude, is the last of the major AI companies not offering its technology for unrestricted use on a new US military internal network. CEO Dario Amodei has repeatedly voiced ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and AI-assisted mass surveillance that could track dissent.
Defense officials warned that they could designate Anthropic as a supply chain risk or invoke the Defense Production Act to essentially give the military access to its technology even if the company does not approve of how its products are used, according to people familiar with the talks and a senior Pentagon official. Neither was authorized to comment publicly, and both spoke on condition of anonymity.
The standoff, previously reported by Axios, highlights the debate over AI’s role in national security and concerns about how the technology will be used in high-stakes situations involving deadly force, classified information or government surveillance. It also reflects Hegseth’s vow to eradicate what he calls “woke culture” in the military.
“Powerful AI monitoring billions of conversations by millions of people could gauge public sentiment, detect when disloyal groups are forming, and root them out before they spread,” Amodei wrote in an essay last month.
The person said the atmosphere of the talks was friendly, but that Amodei did not budge on two areas Anthropic had set as red lines: fully autonomous military targeting operations and domestic surveillance of American citizens.
A senior Pentagon official said the Pentagon opposes Anthropic’s ethics restrictions because military operations require tools without built-in limits. The official insisted that the Pentagon issues only lawful orders and emphasized that it is the military’s responsibility to use Anthropic’s tools legally.
Anthropic will no longer be the only AI company approved for classified military networks
Last summer, the Department of Defense announced defense contracts with four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. Each contract is worth up to $200 million.
Anthropic is the first AI company approved for classified military networks and works with partners such as Palantir. Musk’s xAI company, which operates the Grok chatbot, said Grok is ready to be used in classified settings, according to a senior Pentagon official.
The official noted that other AI companies are also “approaching” that milestone. Musk’s spaceflight company SpaceX, which recently merged with xAI, did not immediately respond to a request for comment Tuesday.
In a January speech at SpaceX in South Texas, Hegseth said he would ignore any AI model that “doesn’t lead to war.”
Hegseth outlined a vision of military AI systems that operate “without ideological constraints that limit legitimate military use,” adding that the Pentagon’s AI “will not be woke.”
Hegseth said Grok will join the Department of Defense’s secure but unclassified AI network, called GenAI.mil. The announcement came days after Grok, which is built into Musk’s social media platform X, drew global scrutiny for generating highly sexualized deepfake images of people without their consent.
OpenAI announced in early February that it would also join GenAI.mil, allowing service members to use a custom version of ChatGPT for unclassified tasks.
Anthropic bills itself as more safety-oriented
“In line with what our model can do reliably and responsibly, we have continued to discuss our usage policies in good faith to ensure that Anthropic continues to support the government’s national security mission,” Anthropic said in a statement after Tuesday’s meeting.
Anthropic has long touted itself as the most responsible and safety-focused of the big AI companies. Its founders left OpenAI and launched the startup in 2021.
Owen Daniels, associate director of analysis and a fellow at Georgetown University’s Center for Security and Emerging Technology, said the standoff with the Pentagon is testing those intentions.
“Anthropic’s peers, including Meta, Google, and xAI, are happy to comply with the department’s policy of using the model for all lawful applications,” Daniels said. “The company therefore has limited bargaining power here and risks losing influence in driving the sector’s AI adoption.”
In the AI boom that followed ChatGPT’s release, Anthropic worked closely with President Joe Biden’s Democratic administration, voluntarily submitting its AI systems to third-party monitoring designed to prevent national security risks.
Amodei has warned of the potentially devastating dangers of AI while rejecting the label of AI “doomer.” In an essay in January, he argued that “we are much closer to real danger in 2026 than we were in 2023,” but that those risks should be managed in a “realistic and pragmatic way.”
Anthropic is at odds with the Trump administration
This isn’t the first time Anthropic’s push for stronger AI safeguards has put it at odds with President Donald Trump’s administration. The company drew the ire of chipmaker Nvidia by publicly criticizing Trump’s proposal to ease export restrictions and allow some AI computer chips to be sold in China, even as AI companies, Anthropic included, maintain close partnerships with Nvidia.
President Trump’s Republican administration and Anthropic are also on opposing sides in lobbying for AI regulation in US states.
In October, Trump’s top AI adviser, David Sacks, accused Anthropic of engaging in a “sophisticated regulatory capture strategy based on fear-mongering.”
Sacks was responding on X to Anthropic co-founder Jack Clark, who had written about his attempt to balance technological optimism with “appropriate fear” over the steady march toward more capable AI systems.
Anthropic hired a number of former Biden officials shortly after Trump returned to the White House, in part to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official during President Trump’s first term, to its board of directors.
Amos Toh, a senior adviser in the Brennan Center for Justice’s Liberty and National Security Program at New York University, said the Pentagon’s “astounding” deployment of AI shows the need for greater congressional oversight and regulation of AI, especially when it is used to monitor American citizens.
“The law has not kept up with the speed at which technology is evolving,” Toh wrote in a post on Bluesky. “But that doesn’t mean the Department of Defense has a blank check.”
___
O’Brien reported from Providence, Rhode Island.
