Artificial intelligence. Deepfakes. ChatGPT. These are terms most of us didn't think much about even a few months ago, but now they're everywhere. They also raise real questions for our court system.
First, some potential benefits of AI. Law enforcement, attorneys, and courts are facing an increase in video evidence from body cameras, squad cameras, and private surveillance equipment. Social media and other electronic data from mobile phones and computers add to this load. In the future, AI algorithms may allow government agencies to sift through that information and ease what is currently a highly labor-intensive process.
However, giving up the human eye in this process is not without risks. Can we trust AI to find what we’re looking for in a pile of videos?
This technology also has a lot of potential for mischief. Deepfakes can be particularly problematic for the court system. I've seen some horrifyingly realistic deepfake videos circulating on social media, and a CNN reporter used AI to clone his own voice convincingly enough to fool his own parents.
It is easy to imagine how scammers could exploit this type of technology to obtain personal information and money. And given how much we rely on audio and video evidence in courts today, high-quality deepfakes could cause serious confusion: imagine, for example, a fabricated video purporting to show a defendant somewhere other than the crime scene at the time of the crime, manufacturing a false alibi. Worse, as this technology becomes more prevalent, we all become far less likely to trust our own eyes and ears, and may come to doubt even valid evidence.
Another potential problem is bias and inequity. AI takes in information, "learns" from it, and draws conclusions. If its input is tainted by bias and systemic unfairness, the output will be similarly flawed, but cloaked in the added authority of a seemingly scientific process. That can unduly influence jurors and the public, making it much harder to secure a fair trial.
I don't have solutions for all of these problems. Judges, attorneys, and law enforcement will need more training on this technology to better protect the system from potential harm. Transparency about the algorithms and platforms government agencies use is essential. And hopefully we'll see more innovation in deepfake detection, so that in court we can tell the real thing from the manufactured one.
Full disclosure: I asked ChatGPT (on a personal device, since I'm not allowed to use it at work) for suggestions on writing a newspaper column about how AI and deepfakes affect court systems, and I got an outline much like the one I followed here. Can you tell?
Dale Harris of Duluth is a judge in the Sixth Judicial District. He wrote this at the invitation of the News Tribune Opinion page.
