Federal judge temporarily blocks Pentagon from designating AI company Anthropic a supply chain risk

SAN FRANCISCO (AP) – A federal judge has ruled in favor of artificial intelligence company Anthropic, temporarily blocking the Department of Defense from designating it as a supply chain risk.

U.S. District Judge Rita Lin said Thursday that she will also block enforcement of President Donald Trump’s directive, issued on social media, that ordered all federal agencies to stop using Anthropic and its chatbot Claude.

Lin said the “broad punitive actions” taken by the Trump administration and Secretary of Defense Pete Hegseth appear arbitrary and capricious and could “artificially cripple” the AI company, an unusual use of military powers that have traditionally been directed against foreign adversaries.

“There is nothing in the provision that supports the Orwellian idea that U.S. companies that express disagreement with the government can be branded as potential adversaries and obstructionists of the United States,” Lin said.

Lin’s ruling followed a 90-minute hearing in San Francisco federal court on Tuesday, in which she questioned why the Trump administration took the unusual step of punishing Anthropic after negotiations over a defense contract stalled. The talks broke down over the company’s efforts to prevent its AI technology from being used in fully autonomous weapons and in surveillance of Americans.

Anthropic had asked Lin to issue an emergency order to remove the stigma it says was unfairly applied to it as part of an “unlawful retaliatory campaign,” which led the San Francisco-based company to sue the Trump administration earlier this month. The Pentagon had insisted it should be able to use Claude in any way it deems lawful.

Lin said her ruling was not about public policy debates, but about the government’s actions taken in response to them.

“If the concern is the integrity of the operational chain of command, then the Department of the Army should simply stop using Claude. Instead, these measures appear to be aimed at punishing Anthropic,” Lin wrote.

Anthropic has also filed a separate, more limited lawsuit that is currently pending before a federal appeals court in Washington, D.C. That lawsuit challenges a different rule the Department of Defense used to designate Anthropic as a supply chain risk.

Lin wrote that while her order blocks the designation, it does not require the Department of Defense to use Anthropic’s products or prevent it from moving to other AI providers.

Anthropic said in a statement: “We appreciate the court’s swift action and are pleased that the court agrees that Anthropic has a good chance of success on the merits.” The company said the lawsuit was necessary to protect its business and customers, but it remains committed to “working productively with the government to ensure all Americans benefit from safe and reliable AI.”

The Pentagon did not immediately respond to a request for comment on the ruling.

A number of third parties had filed legal briefs in support of Anthropic’s lawsuit, including Microsoft, industry groups, engineers, retired U.S. military leaders, and a group of Catholic theologians.

O’Brien reported from Providence, Rhode Island.


