On Monday, as Google signed a secret agreement with the Pentagon that made it the latest player in the artificial intelligence arms race, hundreds of employees at the Silicon Valley giant called on the company’s CEO to keep the Pentagon from using its AI models for covert operations.
Reuters reports that the $200 million pact includes safety filters: the Pentagon may use Google’s AI for “any lawful purpose,” but not to develop lethal autonomous weapons systems (commonly known as “killer robots”) or to conduct domestic surveillance without human oversight or control.
According to The Information’s Erin Woo, the deal does not give Google “the right to control or veto the government’s legitimate operational decisions.”
The agreement also reportedly requires Google to adjust AI safety settings in response to government requests.
“We’re proud to be part of a broad consortium of leading AI labs, technology companies, and cloud companies that provide AI services and infrastructure in support of national security,” a Google spokesperson told The Information.
More than 600 Google employees, many from the company’s DeepMind AI lab, sent a letter to CEO Sundar Pichai on Monday demanding that the company prevent the U.S. military from using its artificial intelligence technology for sensitive projects.
“We want AI to benefit humanity, and we do not want it to be used in ways that are inhumane or extremely harmful,” the letter said, according to the Washington Post. “This includes lethal autonomous weapons and mass surveillance, but it goes beyond that.”
“The only way to ensure that Google does not engage in such harm is to reject classified workloads,” the workers stressed. “Otherwise, such use may occur without our knowledge or authority to prevent it.”
Thousands of AI experts are calling for a moratorium on the development and deployment of advanced AI technologies. But tech companies and military officials argue that if the U.S. does not pursue advanced AI the way its military-industrial complex pursued nuclear weapons during the Cold War, rivals like China will, leaving the U.S. irreparably behind.
The U.S. and allied forces from Israel to Ukraine are using AI to make life-or-death decisions in wartime, including selecting targets for attack at speeds that would have been unfathomable just a few years ago. Such technology has facilitated Israeli massacres in Gaza and Lebanon, as well as U.S. and Israeli killings in Iran.
“Misuse of the technology we play a critical role in building has already cost lives and put civil liberties at risk at home and abroad,” the Google employees’ letter said.
The policies and actions of U.S. government and military leaders are also fueling concerns about how AI will be used.
For example, U.S. Secretary of Defense Pete Hegseth has overseen the dismantling of efforts aimed at reducing civilian casualties in wartime, even as experts say hundreds of thousands of people have died in U.S.-led wars this century. Hegseth has expressed disdain for what he called “stupid rules of engagement” designed to minimize harm to civilians, calling instead for “maximum lethality” in the U.S. military.
Critics say such concerns are borne out by actions like the U.S. cruise missile attack on an Iranian girls’ school that killed 168 children and teachers, and Israeli airstrikes, many using U.S.-supplied bombs, that have killed tens of thousands of Palestinian civilians in the Gaza Strip.
Companies that have clashed with the Trump administration by refusing its requests on military AI also risk being left behind. Anthropic, maker of the AI assistant Claude, has lost a $200 million Pentagon contract and faces a government blacklist and a legal battle after refusing to ease safety restrictions on autonomous weapons and surveillance.
Meanwhile, OpenAI, developer of the generative AI platform ChatGPT, has rewritten its “no military use” policy to allow “national security” applications of its products, opening the door to a lucrative Department of Defense contract.
Not wanting to be left behind when President Donald Trump returned to office last year, Google quietly reversed its pledge not to use artificial intelligence for harmful purposes, completing its departure from the long-standing founding motto “Don’t be evil,” which it had already scrapped in 2018.
The Pentagon contract is moving forward, and Google reportedly wants to add $6 billion in AI deals by next year.
Many AI experts argue that the question is not if, but when, artificial intelligence will surpass human capabilities. Some increasingly describe AI as an emerging new species, and prominent industry voices such as philosopher Nick Bostrom, Machine Intelligence Research Institute co-founder Eliezer Yudkowsky, and the “godfather of AI” Geoffrey Hinton point out that when the goals of a more intelligent species conflict with those of a less intelligent one, the less intelligent species tends to lose, usually with disastrous consequences.
Hinton was concerned enough that he left Google in 2023 so he could speak freely about the remote but growing risk that AI could one day wipe out humanity.
The estimated probability that AI causes a catastrophic existential outcome, known as p(doom), was once something of a joke. Now AI experts’ p(doom) estimates receive as much attention as weather forecasts and market predictions. Yudkowsky has said there is a more than 95% chance of an AI catastrophe.
Hinton, who won the 2024 Nobel Prize in Physics for his work on neural networks, a fundamental technology for AI, is relatively optimistic, saying the probability is between 10% and 20%.
After winning the Nobel Prize, he said, “There are very few instances in which something more intelligent is controlled by something less intelligent.”
