The Pentagon has formally directed senior leaders across the U.S. military to remove Anthropic’s artificial intelligence products from their systems within 180 days, according to an internal memo obtained by CBS News.
The memo is dated March 6, the day after the Pentagon formally designated Anthropic a supply chain risk. The document, distributed to senior leaders on Monday, argues that Anthropic’s AI “poses unacceptable supply chain risks when used in any [Department of War] systems and networks.”
The document, signed by Kirsten Davis, the Pentagon’s chief information officer, represents the latest salvo in the escalating feud between the Trump administration and Anthropic. The notice lays out extensive steps military commanders must take to remove Anthropic’s AI from key national security systems, including nuclear weapons, ballistic missile defense, and cyber warfare.
It also requires companies that do business with the Department of Defense to stop using any Anthropic products in work related to their Department of Defense contracts within 180 days.
Davis warned in the memo that adversaries “could exploit vulnerabilities” in the Pentagon’s daily operations, posing “potentially catastrophic risks to warfighters.” Davis wrote that only she can grant exceptions.
“Exemptions will only be considered for mission-critical activities in direct support of national security operations for which no viable alternatives exist, and the requesting component must submit a comprehensive risk mitigation plan for approval,” she wrote.
A senior Pentagon official confirmed the memo’s authenticity.
Anthropic did not immediately respond to a request for comment.
The federal government’s action is unprecedented, marking the first time a U.S. company has been designated a supply chain risk. During President Trump’s first term, similar measures were taken against foreign companies such as Chinese telecommunications giant Huawei.
The move comes after a stalemate over Anthropic’s request for two “red lines” that would explicitly prevent the U.S. military from using its Claude models to conduct mass surveillance of American citizens or to develop fully autonomous weapons.
“We believe that crossing that line goes against American values, and we wanted to defend American values,” Anthropic CEO Dario Amodei told CBS News.
The Pentagon has previously said it wanted to be able to use Claude for “all lawful purposes” without restrictions, arguing that the AI uses Anthropic was concerned about were already prohibited. According to a source familiar with the military’s use of AI, Claude is currently being used by the U.S. military in the war against Iran.
Anthropic is currently the only AI company whose models are deployed on sensitive Department of Defense systems. After negotiations between the two sides broke down last month, OpenAI, the creator of ChatGPT and one of Anthropic’s biggest rivals, announced it had signed a contract with the Department of Defense.
On Monday, Anthropic filed two lawsuits against the federal government, arguing that Pentagon officials’ decision to label the company a supply chain risk amounted to unlawful retaliation.
“The Constitution does not authorize the government to use great power to punish corporations for protected speech,” the company said in its lawsuit. “There is no federal law authorizing the actions taken here.”
White House Press Secretary Liz Houston responded to the lawsuit in a statement: “We will never allow far-left woke corporations to jeopardize our national security by determining how the world’s greatest and most powerful military operates.”
A person with direct knowledge of Claude’s military capabilities told CBS News that the primary task Claude undertakes for the military is sifting through large volumes of intelligence reports, synthesizing patterns, summarizing findings and uncovering relevant information faster than human analysts.
“The military is currently processing about 1,000 potential targets a day, striking the majority of them, and the time between attacks can be less than four hours,” said Mark Montgomery, a retired Navy admiral who is now a senior director at the Foundation for Defense of Democracies. “Humans still review the information, but AI is now performing analysis that previously took days, and at a scale unmatched by previous campaigns.”
