What goes wrong when police use AI to generate reports?

Axon — a major manufacturer of police body cameras and Tasers (and a company that has pushed to arm drones) — has developed a new product: an AI that writes police reports on behalf of officers. Draft One is a generative large language model system that takes audio from body-worn cameras and turns it into a narrative police report that officers can edit and file after an incident. Axon touts the product as the ultimate time-saver for police departments looking to get officers out from behind the desk. But the technology could create new problems for the people who encounter police, especially the marginalized communities that already bear a disproportionate share of police interactions in the United States.
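Axon has not published Draft One's internals, but the general audio-to-report pattern it describes can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration only: the whisper speech-to-text library is a real open-source tool, while llm_generate and the prompt wording are assumptions standing in for whatever proprietary model Axon actually uses.

```python
# Illustrative sketch only: Axon has not published how Draft One works.
# Pattern: speech-to-text on body-camera audio, then an LLM drafts a report.
import whisper  # openai-whisper, a real open-source speech-to-text model


def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative language model."""
    raise NotImplementedError("Replace with a real LLM API call.")


def draft_report(audio_path: str) -> str:
    # 1. Transcribe the body-worn camera audio to raw text.
    stt = whisper.load_model("base")
    transcript = stt.transcribe(audio_path)["text"]

    # 2. Ask a language model to turn the raw transcript into a
    #    first-person narrative for the officer to review and file.
    prompt = (
        "You are drafting a police incident report. Using only the "
        "transcript below, write a first-person narrative of the "
        "incident:\n\n" + transcript
    )
    return llm_generate(prompt)
```

Everything downstream of the transcript, including what the model asserts as fact versus merely reports as speech, depends on prompt and model choices the public cannot see.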

Accountability and codification of inaccuracies (intentional or not)

We've seen it before: grainy, shaky police body-worn camera video of an arresting officer yelling, "Stop resisting!" That phrase can precede an escalation in police use of force or bolster additional criminal charges. In some cases the cry may be justified. But as we have seen time and again, the story that someone was resisting arrest can be a misrepresentation. Inserting AI into the creation of police narratives could make an already fraught system even more susceptible to abuse.

The public should be skeptical of a language algorithm's ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As we've learned from watching content moderation develop online, software may be reasonably good at capturing words, but it often struggles with context and meaning. In a tense situation such as a traffic stop, if the AI mistakes a figurative statement for a literal assertion, it could fundamentally change how a police report is interpreted.

Moreover, so-called artificial intelligence, like all technology that takes over important tasks and decisions, can obscure human agency. AI-generated police reports hand officers who deliberately speak falsehoods or exaggerations on camera to shape the narrative an even more plausible veneer of deniability. If an officer is caught with a lie in a report, they may be able to claim they never lied at all: the AI simply mistranscribed what was happening in a chaotic video.

It's also unclear how the technology will actually work. If an officer says aloud on body camera video, "The suspect has a gun," how does that translate into the software's final product? Does it render the claim as established fact: "I [the officer] saw the suspect produce a weapon," or "The suspect was armed"? Or does it merely report what was said: "The officer yelled that the suspect had a gun"? The interpretation matters, and the difference could have fatal consequences for defendants in court.

Review, transparency and audit

Review, auditing, and transparency raise many questions. Draft One allows officers to edit the reports, but how do we ensure officers actually review them for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on facial recognition matches alone, without any follow-up investigation, despite vendors' own insistence that such a match is an investigative lead, not a definitive identification.

Moreover, if an AI-generated report gets something wrong, can we trust police to contradict that version of events when it's in their interest to preserve the inaccuracy? AI-written reports may also go the way of AI-enhanced body cameras: if the software consistently generates reports from audio in ways police don't like, will they edit the reports, throw them away, or stop using the software altogether?

And what about external reviewers' ability to access these reports? Given police departments' frequent secrecy and non-compliance with public records laws, how can the public or outside agencies independently verify or audit these AI-assisted reports? And how will external reviewers be able to tell which parts of a report were generated by AI and which were written by a human?
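Nothing in Axon's public materials indicates that Draft One records provenance at this level of detail. As a purely hypothetical sketch of what auditability could require, a report format might tag each sentence with its origin and preserve an edit trail; every name below (Sentence, AuditableReport, the origin labels) is invented for illustration:

```python
# Purely hypothetical sketch: one way an auditable report format could
# record which sentences were machine-generated and which were edited.
# None of this reflects how Draft One actually works.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Sentence:
    text: str
    origin: str                      # "ai_generated" or "officer_written"
    edited_by_officer: bool = False


@dataclass
class AuditableReport:
    incident_id: str
    sentences: list[Sentence] = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def edit(self, index: int, new_text: str) -> None:
        """Record an officer's edit instead of silently overwriting."""
        s = self.sentences[index]
        s.text = new_text
        s.edited_by_officer = True


report = AuditableReport(incident_id="2024-000123")
report.sentences.append(Sentence("Subject stated he had a wallet.", "ai_generated"))
report.edit(0, "Subject reached toward his pocket and stated he had a wallet.")

for s in report.sentences:
    tag = "AI" if s.origin == "ai_generated" else "human"
    flag = " (edited)" if s.edited_by_officer else ""
    print(f"[{tag}{flag}] {s.text}")
```

With a structure like this, an external reviewer could at least distinguish machine-drafted language from an officer's own words, which is exactly the distinction the current product leaves opaque.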

Police reports, often written and edited after the fact and shaped by bias, codify a police department's official memory. They don't necessarily reveal what happened during a particular incident, but rather what police, well-intentioned or not, believed to have happened. An institution with the legal power to kill, detain, and ultimately deny people their freedom is too powerful to let its officers outsource the creation of that memory to technology insulated from scrutiny, transparency, and accountability.


