Anthropic says it won’t allow the U.S. government unrestricted use of AI

Anthropic’s website, Thursday, February 26, 2026.

AI company Anthropic said Thursday it will not allow the Pentagon unrestricted use of its technology, despite mounting pressure to comply.

“These threats do not change our position. We cannot in good conscience comply with their demands,” Anthropic CEO Dario Amodei said in a statement.

The U.S. government had given the AI startup until Friday to consent to unconditional military use of its technology, even where that would violate its internal ethical standards, or face being forced to comply under federal emergency powers.

Amodei said the company’s AI models are deployed by the Department of Defense and intelligence agencies to protect the country, but that there is an ethical line when it comes to mass surveillance of American citizens or use in fully autonomous weapons.


“Using these systems for domestic mass surveillance is incompatible with democratic values,” Amodei said. Cutting-edge AI systems cannot yet be trusted to power lethal weapons without ultimate human control, he added. “We do not knowingly offer products that endanger U.S. warfighters or civilians.”

Cold War-era law

After meeting with Anthropic earlier this week, the Pentagon issued a harsh ultimatum: if the company did not agree to unrestricted military use of its technology by 5:01pm (22:01 GMT) on Friday, it would face enforcement under the Defense Production Act.

The Cold War-era law, last applied during the coronavirus pandemic, gives the federal government broad powers to force private industry to prioritize national security needs.

The Department of Defense also threatened to designate Anthropic as a supply chain risk. This designation is typically reserved for companies in hostile countries and can seriously damage a company’s reputation and ability to work with the U.S. government.

The Pentagon has confirmed that Elon Musk’s Grok system has been cleared for use in classified environments, while other contractors, OpenAI and Google, are said to be nearing similar clearances, increasing the competitive pressure on Anthropic to follow suit.

Anthropic signed a $200 million contract with the Department of Defense last year to supply AI models for various military applications.


Former OpenAI employees founded Anthropic in 2021 on the premise that AI development should prioritize safety, a philosophy that now puts the company on a collision course with the Pentagon and the White House.

“Anthropic understands that military decisions are made by the Department of Defense, not by private companies,” Amodei said.

“However, we believe that in certain limited cases, AI could undermine rather than protect democratic values.”

Le Monde (AFP News)


