Remaining big tech companies rally to seal Pentagon deal that Anthropic won’t accept


While Anthropic took a stance earlier this year to limit how its AI technology is used in sensitive military environments, the rest of Silicon Valley doesn’t seem to be as concerned.

Four other tech companies have also signed agreements with the US Department of Defense (DoD) to deploy advanced AI capabilities to classified military networks for “legitimate operational use.”

Microsoft, Nvidia, Amazon Web Services and startup Reflection AI are the latest companies to sign agreements with the Pentagon, according to a Pentagon press release. They join SpaceX, OpenAI, and Google, bringing the total number of AI companies participating in classified military operations to seven.

“These agreements will accelerate the transformation toward establishing the U.S. military as an AI-first combat force and strengthen the warfighter’s ability to maintain judgment superiority across all domains of warfare,” the press release reads.

The press release added that integrating advanced AI systems into classified networks will “streamline data synthesis, improve situational understanding, and enhance warfighter decision-making in complex operational environments.”

“For more than a decade, AWS has been committed to supporting our nation’s military and ensuring our warfighters and defense partners have access to the best technology at the best price,” AWS spokesperson Tim Barrett told Gizmodo in an emailed statement. “We look forward to continuing to support the Department of the Army’s modernization efforts and build AI solutions that help accomplish its critical missions.”

Microsoft, Nvidia, and Reflection AI did not immediately respond to requests for comment from Gizmodo.

The deals come amid growing concerns about the use of AI for surveillance and military applications.

Earlier this year, Anthropic, then the only major AI company working with the Pentagon on classified systems, reportedly hit a wall in negotiations when the agency sought language that would allow Anthropic's technology to be used for "any lawful purpose." The biggest sticking points were potential applications in domestic surveillance and autonomous weapons systems. In both cases, there is an argument that AI could already be lawfully used for those purposes: laws and courts' interpretations of them are constantly changing, and the United States has very few laws written with AI in mind.

After those negotiations reportedly fell through, the Trump administration designated the company as a supply chain risk. Anthropic subsequently filed two lawsuits against the Department of Defense in response. But since then, President Donald Trump has said his administration has had “very good discussions” with Anthropic and suggested a future agreement to restore the company’s access to Pentagon work could still be “possible.”

Anthropic’s most advanced AI model, Mythos, presents an even more complicated case. Although the model has been released only to select organizations, it is reportedly already being tested by the National Security Agency (NSA) to identify cybersecurity vulnerabilities in widely used software, including Microsoft products.

Still, just yesterday, Secretary of Defense Pete Hegseth said at a Senate Armed Services Committee hearing that Anthropic is run by “ideological lunatics who should not be making sole decisions about our actions.” When asked whether humans will always be involved, Hegseth avoided giving a direct answer, instead insisting that “we follow the law and humans make decisions,” adding that AI does not currently “make a fatal decision.”

The controversy has some technology companies trying to straddle the fence.

Google entered into its agreement earlier this week even though more than 600 employees, including directors and vice presidents, signed a letter urging CEO Sundar Pichai to refuse to allow Google’s AI models to be used in sensitive military settings.

Meanwhile, in a blog post announcing the partnership, OpenAI said it would maintain control of its “safety stack” and prohibit the use of its AI for domestic mass surveillance or directing autonomous lethal weapons systems.

The Information reports that Google’s contract includes similar language, but also states that the company “does not have any right to control or veto any lawful operational decisions of the government.”
