Judge says ICE used ChatGPT to create use-of-force reports

Applications of AI


Last week, a judge handed down a 223-page opinion criticizing the Department of Homeland Security for conducting raids targeting illegal immigrants in Chicago. Buried in the footnotes were two sentences revealing that at least one law enforcement officer was using ChatGPT to generate reports intended to document how officers used force against individuals.

The decision, written by U.S. District Judge Sara Ellis, took issue with the conduct of Immigration and Customs Enforcement and other agency officials during the so-called “Operation Midway Blitz,” which resulted in more than 3,300 arrests, with more than 600 people detained by ICE, and included repeated violent confrontations with protesters and citizens. Officers were required to document these incidents in use-of-force reports, but Judge Ellis found the reports unreliable, noting frequent discrepancies between what was captured on officers’ body-worn cameras and what was ultimately recorded on paper.

But beyond that, she said, at least one report was not even written by the officer who filed it. Instead, according to her footnote, body camera footage revealed that the agent “asked ChatGPT to piece together a narrative for the report based on a short text and several images about the encounter.” The officer reportedly submitted ChatGPT’s output as his report, even though the information he provided was minimal and the rest was likely filled in with speculation.

“To the extent that agents are using ChatGPT to generate use-of-force reports, this further undermines their credibility and may explain the inaccuracy of these reports in light of the [body-worn camera] video,” Ellis wrote in a footnote.

It’s unclear whether the Department of Homeland Security has a clear policy on the use of generative AI tools for report-writing, according to the Associated Press. At a minimum, one might think that having a generative model fill in gaps with entirely fabricated details when it has little actual information to draw on is far from best practice.

DHS has a page dedicated to the agency’s use of AI, and after test-driving off-the-shelf chatbots, including ChatGPT, it introduced its own internal chatbot to help employees complete “routine tasks.” The footnote does not indicate that the agent used any of the agency’s internal tools, which suggests that the person who completed the report went to ChatGPT directly and uploaded the information there.

No wonder one expert told The Associated Press that this is the “worst case scenario” for law enforcement’s use of AI.
