Anthropic says the Pentagon has declared it a national security risk.



Anthropic said Thursday that the Pentagon has identified it as a national security threat and banned it from doing business with the U.S. military, a surprising move that could send shockwaves through the U.S. AI industry.

The designation, which the company said it received Wednesday, labels Anthropic a “supply chain risk to national security” and requires the Department of Defense and its contractors to stop using Anthropic’s AI services in all defense operations.

Defense Secretary Pete Hegseth announced the move in a post on X on Friday night.

This comes after months of tense negotiations over how the military can use Anthropic’s Claude AI system. Generative AI models like Claude are a relatively new technology, but they were quickly adopted by the Trump administration, including for military use.

Over the past few months, the Department of Defense has been negotiating new contract terms with Anthropic, along with other major U.S. AI companies, to allow for broader military use of AI. While the Pentagon sought to use the company’s powerful AI systems for “all lawful uses,” Anthropic CEO Dario Amodei wanted stronger assurances that the Pentagon would not use the company’s AI technology for lethal autonomous weapons or domestic mass surveillance.

Amodei confirmed the supply chain risk designation in a statement Thursday night, saying the company disputes it: “We do not believe this practice is legally sound and see no other option than to challenge it in court.”

“Anthropic has more in common with the Department of Defense than differences,” he said. “We are both committed to advancing U.S. national security and protecting the American people, and we agree that applying AI across government is urgent. All of our future decisions will be based on that common premise.”

Other AI companies move in

Until last week, Anthropic was the only AI company whose services were authorized for use on the Department of Defense’s classified networks. Hours after Hegseth announced last week that he would seek to designate Anthropic a supply chain risk, OpenAI CEO Sam Altman said his company had reached a new agreement with the Department of Defense to use OpenAI’s services in classified settings, potentially allowing OpenAI to replace much of Anthropic’s current business with the department.

Elon Musk’s xAI and its Grok AI system also signed a deal with the Department of Defense last week to be cleared for use on classified networks.

In a statement posted on Anthropic’s website Thursday evening, Amodei emphasized that the ban on doing business with the military does not apply to Anthropic’s contracts with military suppliers for non-defense-related purposes. Anthropic has extensive business deals with many of America’s largest technology companies, including Amazon and Microsoft, many of which also hold large contracts with the Department of Defense.

A senior Pentagon official confirmed that the supply chain risk designation was effective immediately. “From the beginning, this has been one guiding principle: the military can use technology for any lawful purpose. The military will not allow vendors to enter the chain of command and endanger our warfighters by restricting the lawful use of critical capabilities,” the official told NBC News on Thursday.

In a post announcing the move last Friday, Hegseth wrote that Anthropic “will continue to provide service to the Department of Defense for a period of up to six months to allow for a seamless transition to a better, more patriotic service.”

“Anthropic not only provided a masterclass in arrogance and betrayal, but also a textbook example of how not to do business with the U.S. government and Department of Defense,” he said in the post. “Our position has not and will never waver: The Department of Defense must have full and unrestricted access to Anthropic’s models for all lawful purposes in the defense of the Republic.”

Industry concerns about the label

In a statement last week, issued before Hegseth’s announcement and amid tense contract negotiations, Amodei said the supply chain risk label, typically reserved for foreign adversaries and companies associated with them, “has never before been applied to a U.S. company.”

Legal observers said the designation is unlikely to have much direct legal effect and is instead intended as a warning to other companies to fall in line with Pentagon policy.

The mere threat of such a designation has already rattled Washington and the tech industry. Fearing the precedent a supply chain risk designation would set, defense experts, Anthropic rival OpenAI, and members of Congress have sought throughout the week to defuse tensions between Anthropic and the Department of Defense.

An influential technology advocacy group whose members include Nvidia and Apple sent a letter to Hegseth on Wednesday asking him to refrain from formally applying the supply chain risk label.

Many industry investors are concerned that by targeting one of the nation’s largest and most successful AI companies, the Department of Defense is setting a dangerous precedent that will discourage investment and chill the U.S. AI industry.

Last Friday, just over an hour before the 5 p.m. ET deadline to reach a deal that Hegseth had set earlier in the week, President Donald Trump said he would move to bar Anthropic from other federal agencies.

“The left-wing lunatics at Anthropic made a disastrous mistake in trying to strong-arm the Department of Defense and force it to follow their terms of service instead of the Constitution,” Trump wrote.

The Department of Defense is already using Anthropic’s Claude system as part of a contract with data analytics company Palantir. According to recent reports in the Washington Post and Wall Street Journal, Anthropic’s AI systems are helping troops evaluate intelligence and identify targets in the ongoing war in Iran. NBC News has not confirmed these reports.

Anthropic signed its first contract with Palantir in 2024, allowing the Department of Defense to use the company’s services on sensitive networks, and won another $200 million contract in July to further “prototype frontier AI capabilities to advance U.S. national security.”

In an early round of negotiations, Anthropic agreed to let the Department of Defense use its AI systems for cyber and missile defense purposes.

“Virtually no benefit”

Some experts noted a clear disconnect between labeling one of the largest U.S. AI companies a supply chain risk to national security while declining to apply the same label to DeepSeek, a leading Chinese AI company that has been accused of misconduct. DeepSeek did not respond to multiple media outlets’ requests for comment on the matter.

“We’re treating American AI companies worse than we treat Chinese Communist Party-controlled AI companies,” said Michael Sobolik, an expert on AI and China issues and a senior fellow at the Hudson Institute. “We cannot prevent the most innovative and successful American companies from asking quintessentially American questions about military applications and privacy.

“The U.S. government risks cutting off one of our nation’s best AI companies early in this AI race,” Sobolik continued. “To do that, when the American frontier model is qualitatively and quantitatively superior to China’s, seems like cutting off one’s nose to spite one’s face.”

Tim Fist, director of emerging technologies at the Washington-based think tank Institute for Progress, said the new designation would be counterproductive to America’s AI aspirations.

“Supply chain risk designations, typically used against foreign adversaries, are harming one of America’s top AI companies and further discouraging other companies from working with the federal government,” Fist said in written comments. “This designation harms the AI industry and, in return, provides virtually no benefit to U.S. national security.”


