Police use of AI an “outrageous and unacceptable violation of privacy,” says Police Federation

Applications of AI


From Professor Fraser Sampson, former UK Biometrics and Surveillance Commissioner

Accusing police of deploying “opaque and untested” surveillance tools is nothing new. A quick search online reveals public concerns about facial recognition and other AI-powered technologies being deployed by police departments on an almost daily basis. But last week’s challenge to the latest covert deployment of new technology comes from within policing itself.

A scathing comment from the Police Federation of England and Wales follows revelations that the Metropolitan Police has been secretly using AI-powered technology to monitor the movements, communications and data access of its own officers. Reportedly, around 600 cases have come to light, 42 of them involving senior positions. Some 100 police officers are currently under investigation for gross misconduct, and a further 30 face misconduct charges, following what the general secretary of the staff association called an “outrageous and unacceptable violation of privacy”.

There are two aspects to the use of AI-enabled technology by police. The first is law enforcement and other operational functions, which are the headline-grabbing parts. The second is the more routine administrative work that all large organizations share: personnel, property, logistics and finance. But given the police’s investigative powers and duties, where is the line when it comes to covert internal surveillance to catch rule-breakers? Is that “policing”? Intelligence gathering is an important police function, and the company providing the software, Palantir, is named after the magical seeing-stones used to gather intelligence in Tolkien’s The Lord of the Rings, so perhaps it has a view. Either way, the covert internal deployment of AI-enabled technology in police workplaces matters for several reasons.

First, the broad case for securing standards through the use of new technology is compelling. The public expects high standards of police conduct and an end to abuses. While the same applies to other important public services, the argument also extends to many privately run entities that provide critical functions. The conflicts in Ukraine and the Gulf highlight the vulnerability of national critical infrastructure, and mitigating threats to the water, transportation, food, energy, finance and communications sectors could similarly justify the use of in-house biometric surveillance.

Second, AI is here to stay, and it never arrives finished; it exists in a perpetual beta state. Once you have purchased the kit, you find it does other things just as well. And if something is procured with public funds, there is an obligation to maximize the value it delivers. Function creep comes as standard with AI. With unrelenting spending pressures and resource challenges, it is no longer a hypothetical question when police will use facial recognition technology to issue warnings to employees suspected of pulling a sickie or of interpreting working from home a little too liberally.

Third, there is a compelling efficiency argument for every organization to use AI-powered tools in the workplace to monitor policy compliance. And as AI enhances the processing of employee records, employer-held data will become invaluable for criminal investigations and intelligence gathering. Will your employer share your data when the police come asking for it?

And fourth, if staff efficiency is anything to go by, the adoption of biometric technology in safety-critical functions, such as biometric tachographs, starts to look inevitable.

Moving the bot into HR was inevitable. Bots may be good at spotting wanted people in the street, but their game-changing power lies in examining and combining disparate datasets. Bots are MVPs in both senses, most valuable players and minimum viable products, whenever large amounts of live data must be processed iteratively. The business case becomes simple when you compare the cost of a human checking compliance across multiple layers of organizational policy with the cost of automating a highly transactional process and handing it to AI. It is why the same US company is helping the Ukrainian military process real-time intelligence data to increase the efficiency of strikes against Russian forces.

It is interesting to see the police on the other side of this technology debate. I hope no one reaches for the tired cliché that “if you’re not doing anything wrong, you have nothing to worry about,” but I wouldn’t bet on it. So far the revelations extend only to the Metropolitan Police, and it remains to be seen to what extent other British forces will follow suit.

While it is natural for employers to want to explore the benefits that AI can bring, all workers, police and non-police alike, need safeguards and assurances. When, where, by whom and for what purposes can biometric and related data be accessed? The platitude that audits are limited to “compliance with relevant policies” is probably not enough to allay the fears of staff and their unions. As the general secretary of the Metropolitan Police Federation asks: “Where is the transparency… and where is the reassurance that there are appropriate checks and balances?”

I have pointed out elsewhere what this looks like from a regulatory perspective, and the police are now looking at it from the other end of the AI telescope. As public watchdogs begin to turn biometric technology inward, people’s views about the need for clearer regulation may change. Some are already asking: if the police do this to each other, what chance do the rest of us have? While we wait to find out, imagine this. If your employer asked you to comment on a detailed analysis of all your meetings, travel, calls, hours worked, breaks and buildings visited over the last year, would you be in any position to respond? There is a certain inequality of arms when it comes to disputing material generated by military-grade software. If you are not sure why that should be a concern, ask a British sub-postmaster.

What will the future hold? That is uncertain, but the next global pandemic will reveal just how far the collective surveillance capabilities of employers and governments have evolved.

About the author

Fraser Sampson is a former UK Biometrics and Surveillance Commissioner, Professor of Governance and National Security at CENTRIC (Centre for Terrorism, Resilience, Intelligence and Organized Crime Research), and Non-Executive Director of Facewatch.

Article topics

AI | Biometric Monitoring | Biometrics | Data Privacy | Fraser Sampson | Law Enforcement | Metropolitan Police | Surveillance
