Last week, Google quietly abandoned its long-standing commitment not to use artificial intelligence (AI) technology in weapons or surveillance. In an update to its AI principles, first published in 2018, the tech giant removed a statement promising not to pursue:
- Technologies that cause or are likely to cause overall harm
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
- Technologies that gather or use information for surveillance in violation of internationally accepted norms
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
The update comes after US President Donald Trump revoked former president Joe Biden's executive order aimed at promoting the safe, secure and trustworthy development and use of AI.
Google's decision follows a recent trend of big tech companies moving into the national security arena and accommodating more military applications of AI. So why is this happening now? And what will be the impact of more military uses of AI?
The growing trend of militarized AI
In September, senior officials from the Biden administration met with bosses of major AI companies, such as OpenAI, to discuss AI development. The government then announced a task force to coordinate data centre development, with consideration given to economic, national security and environmental goals.
The following month, the Biden administration released a memo that dealt, in part, with "using AI to meet national security goals."
Big tech companies were quick to heed the message.
In November 2024, tech giant Meta announced it would make its "Llama" AI models available to government agencies and private companies involved in defense and national security.
This was despite Meta's own policy, which prohibits the use of Llama for "[m]ilitary, warfare, nuclear industries or applications."
Around the same time, AI company Anthropic announced it would partner with data analytics firm Palantir and Amazon Web Services to make its AI models accessible to US intelligence and defense agencies.
The following month, OpenAI announced it had partnered with defense startup Anduril Industries to develop AI for the US Department of Defense.
The companies claim that OpenAI's GPT-4o and o1 models will be combined with Anduril's systems and software to improve the US military's defenses against drone attacks.

Advocating for national security
These companies have defended the changes to their policies on the grounds of US national security interests.
Take Google. In a blog post published earlier this month, the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles.
In October 2022, the US issued export controls restricting China's access to certain kinds of high-end computer chips used in AI research. In response, China issued its own export control measures on high-tech metals that are crucial to the AI chip industry.
Tensions from this trade war have escalated in recent weeks with the release of highly efficient AI models by Chinese tech company DeepSeek. DeepSeek is said to have purchased 10,000 Nvidia A100 chips before the US export controls took effect and to have used them to develop its AI models.
It is not clear how the militarization of commercial AI would protect US national interests. But there are clear indications that tensions with China, the US's biggest geopolitical rival, are influencing the decisions being made.
A heavy toll on human life
What is already clear is that the use of AI in military contexts has a demonstrated toll on human life.
For example, in the war in Gaza, the Israeli military has relied heavily on advanced AI tools. These tools require huge amounts of data, along with the computing and storage services offered by Microsoft and Google. The AI tools are used to identify potential targets, but are often inaccurate.
Israeli soldiers say these inaccuracies have accelerated the war's death toll, as recorded by authorities in Gaza.

Google's removal of the "harm" clause from its AI principles runs counter to international human rights law, which identifies "security of person" as a key measure.
It is concerning to consider why a commercial tech company would need to remove a clause about harm.
Avoiding the risks of AI-enabled warfare
In its updated principles, Google says its products continue to align with “the widely accepted principles of international law and human rights.”
Nevertheless, Human Rights Watch has criticized the removal of the more explicit statements about weapons development that appeared in the original principles.
The organization also points out that Google does not explain exactly how its products will align with human rights.
This was also a concern that Joe Biden's executive order on AI, now rescinded, had sought to address.
Biden's initiative was not perfect, but it was a step towards establishing guardrails for the responsible development and use of AI technologies.
These guardrails are needed now more than ever as big tech becomes more entwined with military organizations, and the risks associated with AI-enabled warfare and human rights violations increase.