Anthropic rejects Pentagon request in AI safeguards dispute



STORY: Anthropic says it won't comply with a Department of Defense request to remove safeguards from its AI systems.

That's despite threats to label the company a "supply chain risk" and remove it from Pentagon systems, jeopardizing multimillion-dollar contracts.

The dispute stems from the AI startup's refusal to lift safeguards that prevent its technology from being used to autonomously target weapons or conduct surveillance in the United States.

:: File

Anthropic CEO Dario Amodei emphasized in a statement Thursday that the company opposes the use of its AI models for mass surveillance in the country.

He also said that "Frontier AI systems simply aren't reliable enough to power fully autonomous weapons."

Earlier in the day, Pentagon spokesperson Sean Parnell said on X that the Pentagon has no interest in using AI to conduct mass surveillance of American citizens, nor does it want AI to be used to develop autonomous weapons that operate without human intervention.

He said the department is seeking "to allow the Department of Defense to use Anthropic's model for any lawful purpose."

Parnell said the company needs to make a decision by 5:01 p.m. ET on Friday.

Anthropic, which is backed by Google and Amazon, has a contract worth up to $200 million with the department.

More than 200 Google and OpenAI employees supported Anthropic's position in an open letter.

Neither company responded to requests for comment.


