Washington — The Pentagon gave Anthropic an ultimatum this week: give the U.S. military unrestricted use of its AI technology or face a ban on all government contracts.
At the heart of the issue is who controls how artificial intelligence models are used: the Department of Defense or company CEOs.
Department of Defense AI contract
In July, the Department of Defense awarded Anthropic a $200 million contract to develop AI capabilities to advance U.S. national security.
Anthropic’s rivals, including OpenAI, Google, and xAI, also won $200 million in contracts from the Department of Defense last year.
Anthropic is currently the only AI company to deploy its models to the Department of Defense’s classified networks through a partnership with data analytics giant Palantir.
A senior Pentagon official told CBS News that xAI, Elon Musk's company behind Grok, has agreed to the model's use in classified environments, and that other AI companies have done the same.
The Pentagon announced last month that it was considering accelerating the use of AI, saying the technology could help the military “rapidly transform intelligence data” and “improve warfighter lethality and efficiency.”
Collision over guardrails
The conflict between the Pentagon and Anthropic was reportedly sparked by the U.S. military's use of the company's AI model, known as Claude, during the operation in January to capture former Venezuelan President Nicolas Maduro.
Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, including limits on using Claude to conduct mass surveillance of Americans, officials told CBS News.
The company also wants to ensure the Pentagon does not use Claude to make final targeting decisions in military operations without human involvement, one of the sources said. Claude is not immune to hallucinations, nor is it reliable enough, absent human judgment, to avoid fatal mistakes such as unintended escalations or mission failures, sources said.
Asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance or the use of autonomous weapons. The Pentagon just issued a lawful order.”
Pentagon officials have expressed concerns to Anthropic that the company’s guardrails could impede important actions, such as responding to an intercontinental ballistic missile fired toward the United States.
Restrictions imposed by companies “could create a dynamic where we start using those models, we get used to how they work, and when we need to use them in an emergency situation, we can’t use them,” Emil Michael, the undersecretary of defense for research and engineering, said at a February event.
Asked who is responsible if AI used to attack or kill military targets makes a mistake, the military or the AI companies, one defense official said, “The legality is up to the end user, the Department of Defense.”
Statements from top leaders
Anthropic CEO Dario Amodei has spoken out about the potential dangers of AI and has centered the company’s brand on safety and transparency.
In a lengthy essay last month, Amodei warned about the potential for the technology to be misused, writing: “Powerful AI that monitors billions of conversations by millions of people could gauge public sentiment and detect disloyal groups as they form and eradicate them before they spread.”
“While democracies typically have safeguards in place to prevent military and intelligence agencies from turning inward against their own citizens, the very small numbers of people needed to operate AI tools can circumvent these safeguards and the norms that underpin them. It is also notable that some of these safeguards are already being eroded over time in some democracies,” he wrote.
Mr. Amodei has long supported what he calls “smart AI regulation.” This includes rules requiring AI companies to be transparent about the risks posed by their models and the steps they take to mitigate them.
The Trump administration, on the other hand, has favored a lighter touch, arguing that strict AI regulations could stifle innovation and make it difficult for the U.S. AI industry to compete. The administration has tried to block what it calls “excessive” state-level regulation. At one point last year, venture capitalist and White House AI and crypto advisor David Sacks accused Anthropic of “fear-mongering” and suggested that its interest in regulating AI was self-serving.
Secretary of Defense Pete Hegseth derided such guardrails in a January speech as a “social justice injection that constrains and confuses the use of this technology.”
“We will not adopt an AI model that will not go to war with us,” Hegseth declared. “We judge AI models by this criterion alone: factually accurate, mission-relevant, and free of ideological constraints that limit legitimate military use. The department’s AI will never be woke. That works for us. We’re building weapons and systems for war, not chatbots for Ivy League faculty lounges.”
What’s next for Anthropic and the Pentagon?
Hegseth has given Anthropic until Friday to agree to allow the U.S. military unrestricted use of its technology; if it does not, it risks being blacklisted, sources familiar with the situation told CBS News.
Pentagon officials are considering invoking the Defense Production Act to force Anthropic into compliance on national security grounds.
Alternatively, if no deal is reached, defense officials are considering labeling the company a “supply chain risk” and forcing it out of government service.
