Google employees urge CEO to reject ‘inhumane’ use of classified military AI

More than 600 Google employees have called on the company to reject a potential deal with the US Department of Defense that would allow its artificial intelligence to be used in classified military operations, organizers said in a statement on Monday.

“We want AI to benefit humanity, not be used in inhumane or extremely harmful ways,” an open letter to Google CEO Sundar Pichai said. “This includes lethal autonomous weapons and mass surveillance, but extends beyond that.”

The letter, signed by staff from Google DeepMind, Cloud and other departments, comes as the tech giant is in talks with the US Department of Defense over the possible use of its Gemini AI model in classified settings.

The letter has been publicly signed by more than 20 directors, senior directors, and vice presidents.

“Classified workloads are by definition opaque,” said one of the letter’s organizers, who was not named in the statement.

“Right now, there is no way to ensure that our tools are not used away from public scrutiny to cause grievous harm or erode civil liberties. We are talking about things like personal profiling and targeting innocent civilians.”


The letter comes amid growing pressure on technology companies to clarify how their AI tools can be used by the military and intelligence agencies, in the wake of a dispute between the Department of Defense and AI startup Anthropic.

Anthropic previously sued the US Department of Defense after being classified as a “supply chain risk” following its request that the company’s systems not be used for mass surveillance or autonomous warfare.

Anthropic CEO Dario Amodei said the company “cannot in good conscience comply with the Department of Defense’s request” for unrestricted access to its AI systems.

“We believe that in limited cases, AI could undermine rather than protect democratic values,” Amodei wrote. “Some applications are simply beyond what can be done safely and reliably with today’s technology.”

Following Amodei’s decision, US President Donald Trump ordered government departments to stop using the company’s Claude chatbot.

Google proposed contract language that would prevent Gemini from being used for domestic mass surveillance or autonomous weapons without proper human control, the letter’s organizers said.

But the Pentagon has called for broader language of “all lawful uses,” arguing it needs to maintain operational flexibility. Employees say such safeguards are difficult to enforce in practice, citing existing Pentagon policies that limit outside control over AI systems.

The current campaign draws comparisons to employee protests in 2018 that prompted Google to withdraw from Project Maven, a Pentagon initiative that used AI to analyze drone footage.

“We believe Google should not be in the business of war,” the 2018 letter said.

“Therefore, we call for Project Maven to be halted, and for Google to draft, publish, and enforce a clear policy that neither Google nor its contractors will ever develop warfare technology.”
