Large-scale surveillance using AI
With the new and expanded surveillance law in place, French authorities are working with AI companies Videtics, Orange Business, ChapsVision and Wintics to roll out widespread AI video surveillance.
The French government has rolled out AI surveillance at major concerts, sporting events, and public gatherings, including a Taylor Swift concert and the Cannes Film Festival, as well as in subway and train stations at peak times. French officials say the tests have been successful and that they are “fully prepared” for further deployment.
The AI software used is typically designed to flag certain events: changes in crowd size or movement, abandoned objects, the presence or use of weapons, a body on the ground, smoke or flames, and certain traffic violations. The goal is real-time detection: when a crowd surges toward a gate or someone leaves a backpack on a busy street corner, the system should flag the event immediately and alert security personnel. Flagging these events seems like a logical and sensible use of technology.
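To make the alerting step concrete, here is a minimal sketch in Python of how such a system might forward flagged events to staff. The `Event` type, the event names, and the 0.8 confidence threshold are illustrative assumptions; the vendors' actual interfaces are not public.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Event:
    kind: str          # e.g. "crowd_surge", "abandoned_object", "smoke"
    confidence: float  # model's confidence in the detection
    camera_id: str

# Hypothetical operating points; real deployments tune these per event type.
ALERT_KINDS = {"crowd_surge", "abandoned_object", "weapon", "body_on_ground", "smoke"}
ALERT_THRESHOLD = 0.8

def monitor(events: Iterable[Event], notify: Callable[[str], None]) -> None:
    """Forward high-confidence events of configured kinds to security staff."""
    for ev in events:
        if ev.kind in ALERT_KINDS and ev.confidence >= ALERT_THRESHOLD:
            notify(f"[{ev.camera_id}] {ev.kind} (confidence {ev.confidence:.2f})")

# Stand-in detections for demonstration; a vision model would produce these.
monitor(
    [
        Event("crowd_surge", 0.93, "gate-7"),
        Event("abandoned_object", 0.55, "platform-2"),  # below threshold: no alert
    ],
    notify=print,
)
```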
But the real privacy and legal issues arise from how these systems work and how they are used. How much data, and what kind, must be collected and analyzed to flag these events? What training data do the systems rely on, what are their error rates, and what evidence is there of bias or inaccuracy? How is the collected data used afterward, and who has access to it?
There is little transparency available to answer these questions. Despite safeguards meant to prevent the use of personally identifiable biometric data, such data can still be captured in training sets, and the systems can be tuned to use it.
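To illustrate why tuning matters: whether a pipeline extracts biometric features is often just a configuration choice rather than a hard architectural limit. This is a hypothetical sketch; the flag, the stub values, and the `analyze` function are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    use_biometrics: bool = False  # the "safeguard" is a single switch

def analyze(frame: bytes, cfg: PipelineConfig) -> dict:
    """Stub analysis; a real system would compute these from the frame."""
    features = {"crowd_density": 0.0}  # placeholder value
    if cfg.use_biometrics:
        # Flipping the flag turns anonymous event detection into
        # identification using the same video feed.
        features["face_embedding"] = [0.0] * 128  # placeholder vector
    return features
```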
France is supporting these private companies by giving them access to thousands of video cameras already installed across the country, by coordinating and leveraging the surveillance capacity of rail companies and transport authorities, and by allowing the use of camera-equipped drones. In effect, the companies have legal permission to test and train their AI software on citizens and visitors.