STORY: Anthropic says it won't comply with a Department of Defense request to remove safeguards from its AI systems, despite threats to label the company a "supply chain risk" and remove it from Pentagon systems, jeopardizing multimillion-dollar contracts. The dispute stems from the AI startup's refusal to lift safeguards that prevent its technology from being used to autonomously target weapons or conduct surveillance in the United States.

Anthropic CEO Dario Amodei emphasized in a statement Thursday that the company opposes the use of its AI models for mass surveillance in the country. He also said that "frontier AI systems simply aren't reliable enough to power fully autonomous weapons."

Earlier in the day, Pentagon spokesperson Sean Parnell said in a post on X that the Pentagon has no interest in using AI to conduct mass surveillance of American citizens, nor does it want AI to be used to develop autonomous weapons that operate without human intervention. He said the Pentagon is seeking "to allow the Department of Defense to use Anthropic's model for any lawful purpose." Parnell said the company needs to make a decision by 5:01 p.m. ET on Friday.

Anthropic, which is backed by Google and Amazon, has a contract worth up to $200 million with the Defense Department. More than 200 Google and OpenAI employees supported the company's position in an open letter. Neither company responded to requests for comment.