Recent public discussion of artificial intelligence (AI) has been dominated by doomsday scenarios and science fiction predictions of advanced AI systems escaping human control. As a result, when people talk about AI warfare, they tend to think of fully automated “killer robots” on the loose. But Israel's Gaza war makes it clear that much more mundane, and not particularly sophisticated, AI surveillance systems are already being used to unleash dystopian, technology-driven horrors.
As recent media investigations have revealed, Israeli AI targeting systems known as “Lavender” and “Gospel” are automating mass killing and destruction across the Gaza Strip. This is the culmination of rights-violating AI trends we have warned about before, including biometric surveillance systems and predictive policing tools. The AI-enabled war in Gaza shows the urgent need for governments to prohibit uses of technology that run counter to human rights, not only in peacetime but also in wartime.
Death from Above: Gaza as a Technological Laboratory
Israel's use of AI in warfare is not new. For decades, it has used the Gaza Strip as a testing ground for new technologies and weapons that are then sold abroad. The 11-day military bombardment of Gaza in May 2021 was described by the Israel Defense Forces (IDF) as “the first AI war.” In the current assault on Gaza, we see Israel using three broad categories of AI tools:
- Lethal autonomous weapons systems (LAWS) and semi-autonomous weapons (semi-LAWS): The Israeli military has pioneered the use of remote-controlled quadcopters equipped with machine guns and missiles to surveil, intimidate, and kill civilians sheltering in tents, schools, hospitals, and residential areas of the Nuseirat refugee camp in Gaza. Reports indicate that some drones broadcast the sounds of crying babies or women to lure Palestinians out and target them. For years, Israel has deployed “suicide drones,” automated “robot snipers,” and AI-enabled turrets to create “automated kill zones” along the Gaza border, and in 2021 it also deployed a semi-autonomous military robot called “Jaguar,” promoted as “one of the world's first military robots that can replace soldiers on the border.”
- Facial recognition systems and biometric surveillance: Israel's ground invasion of Gaza has expanded the biometric surveillance of Palestinians already deployed in the West Bank and East Jerusalem. According to The New York Times, the Israeli military is using a mass facial recognition system in the Gaza Strip to “conduct large-scale collection and cataloguing of Palestinian faces without their consent or knowledge.” The system uses technology from the Israeli company Corsight as well as Google Photos, and can pick out faces in crowds and even in grainy drone footage.
- Automated target-generation systems: These include Gospel, which generates infrastructure targets; Lavender, which generates individual human targets; and Where's Daddy?, a system designed to track suspected militants and target them when they are at home with their families.
LAWS, and to some extent semi-LAWS, have been condemned by the United Nations as “politically unacceptable and morally abhorrent,” and calls for their prohibition have grown. The use of AI targeting systems in warfare deserves further attention because, coupled with biometric mass surveillance, it demonstrates that technologies that should already be banned in peacetime can have devastating, even genocidal, effects in wartime.
Automating Genocide: The Deadly Consequences of AI in War
While it may seem like shocking new ground at first, the use of targeting systems like Gospel and Lavender is really just the culmination of AI systems already in use around the world, such as predictive policing. Just as the Israeli military uses “data-driven systems” to predict who is a Hamas operative and which buildings are Hamas strongholds, law enforcement agencies use AI systems to predict which children might commit crimes or join gangs, or where to deploy additional police forces. Such systems are inherently discriminatory and deeply flawed, with serious consequences for the people affected. In Gaza, those consequences can be fatal.
When assessing the impact of such systems on human rights, we must consider, first, the consequences when they fail and, second, the consequences when they work as intended. In both cases, reducing human beings to statistical data points has grave and irreversible consequences for people's dignity, safety, and lives.
When targeting systems fail, a key concern is that they are built and trained on incomplete data. According to an investigation by +972 Magazine, because the training data fed into the system included information on non-combatant employees of the Hamas-run government in Gaza, Lavender incorrectly flagged as targets individuals whose communication and behavior patterns resembled those of known Hamas fighters, including police and civil defense personnel, relatives of fighters, and even people who merely shared a name with a Hamas operative.
As +972 Magazine reported, despite Lavender's 10% error rate in identifying individuals' ties to Hamas, the IDF gave blanket approval for soldiers to adopt its kill lists automatically, “as if it were human judgment.” Soldiers reported that they were not required to thoroughly or independently check the accuracy of Lavender's output or the intelligence data sources behind it. The only mandatory check before authorizing a bombing was to confirm that the marked target was male, which took approximately “20 seconds.”
There is also no reliable way to test the accuracy of such systems or to verify their performance. The process of verifying a link to Hamas is extremely complicated, especially given that the data on which such predictions are based may itself be flawed. Machine learning systems are notoriously poor at predicting future crimes: not only is the data insufficient, with systems relying on proxy data (e.g., data on arrests rather than on crimes actually committed), but it is also not as simple as “the more data you have, the more accurate your predictions will be.”
Beyond questions of accuracy and the lack of human verification, a more existential concern is how the use of such systems fundamentally contradicts human rights and the inherent human dignity from which those rights derive. This is demonstrated by what happens when Israel's AI targeting systems work as intended. An Israel Defense Forces spokesperson said, “Right now we're focusing on what will do the most damage.” Soldiers were pressured to bomb more targets every day, and allegedly used unguided munitions, so-called “dumb bombs,” to strike the homes of junior militants marked by Lavender. This, together with Israel's use of AI to calculate collateral damage, has resulted in the mass killing of Palestinians and, according to the United Nations, a level of destruction not seen since World War II.
The use of these AI targeting systems effectively relieves humans of responsibility for life-and-death decisions and seeks to conceal campaigns of mass destruction and killing behind a veneer of algorithmic objectivity. Systems such as Lavender and Where's Daddy? have no ethical or humane use because they are premised on the fundamental dehumanization of the people they target. These systems must be banned, and the surveillance infrastructure, biometric databases, and other “peacetime tools” that enable them to be deployed on the battlefield must be dismantled.
Big Tech's role in atrocity crimes
As noted above, surveillance infrastructure developed and deployed in peacetime can easily be repurposed in wartime to enable the worst human rights abuses. This calls into question the role of large technology companies in supplying civilian technologies that can be used for military purposes, in particular the cloud computing and machine learning services that Google and Amazon Web Services provide to Israel through Project Nimbus. Moreover, it has been alleged that metadata from Meta-owned WhatsApp is being used to feed the Lavender targeting system.
By failing to meet their human rights responsibilities and continuing to provide these services to the Israeli government, companies like Google, AWS, and Meta risk being complicit in aiding and abetting the Israeli military and intelligence services in their alleged atrocities in Gaza.
We cannot allow the proliferation of mass surveillance infrastructure that can be used to generate large numbers of targets, determine a “reasonable” number of civilian casualties, and ultimately abdicate human responsibility for life-and-death decisions. We again call on all governments to prohibit uses of AI that are incompatible with human rights, including predictive policing, biometric mass surveillance, and target-generation systems like Lavender. The systems Israel is using in Gaza, built on the mass surveillance laboratory it has expanded over the years, offer a glimpse of an even more dystopian future that must never come to pass.