AI and Policing: How New Technology Tracks You

June 7, 2025, 12:00 PM EDT

Artificial intelligence is changing the way police investigate crimes and monitor citizens, as regulators struggle to keep pace.

A video surveillance camera mounted on the side of a building in San Francisco, California, in 2019, as a man in a bright blue sweater walks down the street in the background.

This is The Marshall Project's Closing Argument newsletter, a weekly deep dive into a key criminal justice issue. Want this delivered to your inbox? Subscribe to future newsletters.

If you are a regular reader of this newsletter, you know that change in the criminal justice system is rarely linear. It stops and starts, slowed by bureaucracy, politics, and plain inertia. Reforms are regularly passed, then rolled back, gutted, or tied up in court.

There is one corner of the system, however, where change is happening rapidly and almost entirely in one direction: the adoption of artificial intelligence. From facial recognition to predictive analytics to increasingly convincing deepfakes and other synthetic video, new technologies are emerging faster than agencies, lawmakers, or watchdog groups can keep up.

A recent report by The Washington Post revealed that over the past two years, New Orleans police officers have quietly received real-time alerts from a private network of AI-equipped cameras, flagging the locations of people on wanted lists. The technology has been used in dozens of arrests since 2023, and was deployed this year in two high-profile cases that thrust the city into the national spotlight: the New Year's Day attack that killed 14 people and injured nearly 60, and last month's escape of 10 men from the city jail.

In 2022, city council members tried to put guardrails on the use of facial recognition, passing an ordinance that restricted the technology to certain violent crimes and required searches to be run by trained examiners at a state facility.

Those rules, however, assume that police are the ones doing the searching. New Orleans police operate hundreds of cameras of their own, but the alerts in question came from a different system: a network of 200 facial recognition-equipped cameras installed by residents and businesses on private property, which feeds video to a nonprofit called Project NOLA. Officers who downloaded the group's app were notified whenever someone on a wanted list was detected, along with their location on the camera network.

That arrangement has alarmed Louisiana civil liberties groups and defense attorneys, who argue that routing surveillance through a private organization strips away the guardrails that should apply to law enforcement and prosecutors, and leaves defenders without the tools to do their jobs. Supporters of the effort, meanwhile, say it has contributed to a significant decline in crime in the city.

Police said they would suspend their use of the technology shortly before The Post's investigation was published.

New Orleans is not the only place where law enforcement has found ways around city-imposed restrictions on facial recognition. Police in San Francisco and Austin, Texas, have bypassed local limits by asking nearby law enforcement agencies or other partners to run facial recognition searches on their behalf, according to a Post report last year.

Meanwhile, at least one city is weighing a novel way to acquire facial recognition technology: sharing millions of jail booking photos with a private software company in exchange for free access. Last week, the Milwaukee Journal Sentinel reported that the Milwaukee Police Department was considering such a swap, trading 2.5 million photos for $24,000 worth of search licenses. City officials say they would use the technology only to further ongoing investigations, not to establish probable cause.

Another way departments can skirt facial recognition rules is by using AI analytics that are technically face-independent. Last month, MIT Technology Review examined the rise of a tool called Track, offered by the company Veritone, which can identify people by “body size, gender, hair color, style, clothing, accessories.” Notably, the algorithm does not track people by skin color. Because the system does not rely on biometric data, it avoids most laws aimed at reining in police use of such technologies. It also lets law enforcement track people whose faces are masked or caught at a bad camera angle.

In New York City, police are looking to AI to identify people not only by their face and appearance, but by their behavior. “If someone is acting out, irrational… it could trigger an alert that prompts a response from either security and/or the police department,” Michael Kemper, the Metropolitan Transportation Authority's chief security officer, said in April, according to The Verge.

Beyond tracking people's physical locations and movements, police are using AI to change how they interact with suspects. In April, Wired and 404 Media reported on a new AI platform called Massive Blue. Its uses include gathering intelligence from protesters and activists, as well as undercover operations aimed at luring people seeking sex with minors.

Like most of what AI is being employed to do, this kind of manipulation is not novel. A few years ago, I covered the Memphis Police Department's efforts to connect with local activists through a department-run Facebook account for a fictional protester named “Bob Smith.” As with many emerging uses of AI, what's new is not the intent; it's that digital tools make these kinds of efforts more convincing, cheaper, and easier to scale.

But the sword cuts both ways. Police and the legal system more broadly are increasingly contending with sophisticated AI-generated material in investigations and courtroom evidence. Lawyers worry that deepfake AI-generated video could be used to fabricate alibis or falsely implicate people in crimes. Conversely, the technology opens the door to a “deepfake defense” that casts doubt on even the clearest video evidence. These concerns became more urgent with Google's release of a hyper-realistic video generation engine last month.

There are also questions about the use of AI-generated replicas of crime victims in court. Last month, an Arizona court heard an impact statement delivered by an AI rendering of a murder victim, created by the man's family. According to local news reports, lawyers for the man convicted in the case have filed an appeal questioning whether the emotional weight of the synthetic video influenced the judge's sentencing decision.


