AI-based mass surveillance at the Paris Olympics – legal scholar discusses security boons and privacy nightmares

The 2024 Paris Olympics will attract global attention, with thousands of athletes and support staff, as well as hundreds of thousands of spectators from around the world, gathering in France. And it won't just be the eyes of the world that will be watching: artificial intelligence systems will too.

Governments and private companies plan to use advanced AI tools and other surveillance technologies to conduct widespread and continuous surveillance before, during and after the Olympics. The Olympics' world stage and international audience create security risks so significant that, in recent years, officials and critics have described the Olympics as “the world's largest security operation outside of war.”

The French government, working hand in hand with the private tech industry, has used a legitimate need for increased security as a justification for deploying technologically advanced surveillance and data collection tools. Surveillance plans to address these risks, including the controversial use of experimental AI video surveillance, are so far-reaching that France has had to change its laws to legalize the planned surveillance.

The plans go beyond new AI video surveillance systems: the French government has reportedly negotiated secret provisional decrees that will allow it to significantly ramp up traditional covert surveillance and intelligence-gathering tools for the duration of the Olympics, including wiretapping and the collection of geolocation, communications, and computer data, as well as capturing large amounts of image and audio data.

French President Emmanuel Macron inspects security cameras in preparation for the Paris Olympics.
Christophe Petit Tesson/AFP via Getty Images

I am a law professor and lawyer who researches, teaches, and writes about privacy, artificial intelligence, and surveillance. I also provide legal and policy guidance on these topics to legislators and others. Heightened security risks can necessitate heightened surveillance, and this year France has faced both doubts about its security capabilities for the Olympics and credible threats surrounding public sporting events.

However, precautions should be proportionate to the risks. Critics around the world argue that France is using the Olympics as a surveillance power grab, and that the government will use this justification for “exceptional” surveillance to normalize state surveillance of society as a whole.

At the same time, there are legitimate concerns about whether security measures are adequate and effective. In the United States, for example, people are asking how Secret Service security planning failed to prevent the assassination attempt on former President Donald Trump on July 13, 2024.

Large-scale surveillance using AI

Enabled by the newly expanded surveillance law, French authorities have been working with AI companies Videtics, Orange Business, ChapsVision, and Wintics to roll out widespread AI video surveillance. Authorities have used AI surveillance at large concerts and sporting events, such as Taylor Swift concerts and before and after the Cannes Film Festival, as well as in subway and train stations at peak times. French authorities said these AI surveillance experiments were successful and “ready-made” for future use.

The AI software used is typically designed to flag specific events: changes in crowd size or movement, abandoned objects, the presence or use of weapons, a body on the ground, smoke or flames, and certain traffic violations. The goal of a surveillance system is to detect such events in real time, such as a crowd surging toward a gate or a person leaving a backpack on a busy street corner, and alert security personnel. Flagging these events seems like a logical and sensible use of technology.
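The logic described above can be illustrated with a minimal sketch. This toy example is entirely hypothetical, not the vendors' actual software: a real system would run detection models over video frames, whereas here per-frame observations are passed in directly. It flags a sudden jump in crowd size and an object left unattended across several frames.

```python
from dataclasses import dataclass, field

@dataclass
class EventFlagger:
    """Hypothetical sketch: flags coarse events from per-frame observations."""
    surge_threshold: int = 50   # jump in head-count between frames that counts as a surge
    abandon_frames: int = 3     # frames an object must sit unattended before alerting
    _last_count: int = 0
    _stationary: dict = field(default_factory=dict)  # object id -> consecutive frames seen

    def process_frame(self, crowd_count: int, unattended_objects: list[str]) -> list[str]:
        alerts = []
        # A sudden jump in crowd size (e.g. a surge toward a gate) raises an alert.
        if crowd_count - self._last_count >= self.surge_threshold:
            alerts.append("crowd_surge")
        self._last_count = crowd_count
        # An object that stays put across several frames looks abandoned.
        seen = set(unattended_objects)
        for obj in seen:
            self._stationary[obj] = self._stationary.get(obj, 0) + 1
            if self._stationary[obj] == self.abandon_frames:
                alerts.append(f"abandoned_object:{obj}")
        # Forget objects that have moved away.
        for obj in list(self._stationary):
            if obj not in seen:
                del self._stationary[obj]
        return alerts
```

Note that even this trivial sketch must retain state across frames (head counts, object tracks), which hints at why the privacy questions below center on what data such systems collect and keep.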

But the real questions about privacy and the law arise from how these systems work and are used. How much and what kind of data must be collected and analyzed to flag these events? What are the systems' training data, error rates, and evidence of bias or inaccuracy? How is the collected data subsequently used, and who has access to it? There is little transparency to answer these questions. Despite safeguards intended to prevent the use of personally identifiable biometric data, such information may be captured in the training data, and the systems may be tuned to use it.

France is supporting these private companies by giving them access to thousands of video cameras already installed across the country, by coordinating and leveraging the surveillance capacity of rail companies and transport operators, and by allowing the use of camera-equipped drones, in effect granting them legal permission to test and train their AI software on citizens and visitors.

Legalized Mass Surveillance

Neither the need for nor the practice of government surveillance at the Olympics is new. During the 2022 Beijing Winter Olympics, security and privacy concerns were so high that the FBI urged “all athletes” to leave their personal cell phones at home and use only disposable cell phones while in China due to the extreme surveillance by the Chinese government.

However, France is a member state of the European Union. The EU's General Data Protection Regulation is one of the strongest data privacy laws in the world, and the EU's AI law is leading the effort to regulate harmful uses of AI technology. As an EU member state, France must comply with EU law.

France has legally allowed the expanded use of AI in the surveillance of public places.

In preparation for the Olympics, France enacted Law No. 2023-380, which provides the legal framework for the 2024 Games. It includes the controversial Article 7, which allows French law enforcement and its technology contractors to experiment with intelligent video surveillance before, during, and after the 2024 Olympics, and Article 10, which explicitly permits the use of AI software to review video and camera feeds. These provisions make France the first EU member state to legalize such wide-reaching AI-powered surveillance.

Academics, civil society groups, and civil rights advocates say these provisions run counter to the General Data Protection Regulation and the EU's efforts to regulate AI. They argue that Article 7 in particular violates the General Data Protection Regulation's biometric data protection provisions.

French authorities and tech company representatives say that AI software can achieve its goal of identifying and flagging certain types of events without identifying people or running afoul of the General Data Protection Regulation's restrictions on processing biometric data. But European human rights groups point out that if the purpose and function of algorithms and AI-driven cameras is to detect certain suspicious events in public places, these systems will necessarily “capture and analyze physiological characteristics and behavior” of people in these spaces. This includes body position, gait, movements, gestures, and appearance. Critics argue that it is biometric data that is being captured and processed, and therefore the French law violates the General Data Protection Regulation.

AI-powered security – at a price

So far, AI surveillance has been a mutually beneficial arrangement for the French government and the AI companies: algorithmic surveillance is becoming more widely used, and it supplies the government and its technical collaborators with far more data than human observers alone could gather.

However, these AI-enabled surveillance systems are poorly regulated and rarely independently tested, and once data is collected, the potential for further data analysis and privacy violations is very high.


