Can AI commit defamation? We are about to find out

Image credit: Roy Scott/Getty Images

The tech world’s hottest new toy could find itself in legal trouble as AI’s tendency to invent news stories and events runs up against defamation laws. Can an AI model like ChatGPT even commit libel? Like so much surrounding the technology, it’s unknown and unprecedented, but upcoming legal challenges may change that.

Defamation is broadly defined as publishing or saying harmful and untrue statements about someone. It’s a complex and nuanced legal area that varies greatly from jurisdiction to jurisdiction: defamation cases in the United States are very different from those in Britain or Australia, where today’s drama plays out.

Generative AI has already raised many unresolved legal questions, such as whether its use of copyrighted material amounts to fair use or infringement. But until as recently as a year ago, neither the image- nor the text-generating AI models were good enough to produce anything you might confuse with reality.

That’s no longer the case: the large language models behind ChatGPT and Bing Chat are bullshit artists operating at enormous scale, and their integration with mainstream products like search engines (and, increasingly, nearly everything else) arguably elevates these systems from glitchy experiments to mass publishing platforms.

So what happens when one of these tools or platforms writes that a government official has been charged with wrongdoing, or that a university professor has been accused of sexual harassment?

A year ago, with no broad integrations and fairly unconvincing language, few would say such false statements could be taken seriously. But today these models answer questions confidently and convincingly on widely accessible consumer platforms, even when those answers are hallucinated or falsely attributed to non-existent articles. They attribute false statements to real articles, or true statements to invented ones, or simply make the whole thing up.

By the nature of how these models work, they don’t know or care whether something is true, only whether it looks true. Sure, that’s a problem when you’re using one to do your homework, but when it accuses you of a crime you didn’t commit, it may well be defamation.

That is the claim of Brian Hood, mayor of Hepburn Shire in Australia, who was informed that ChatGPT had named him as a person convicted in a bribery scandal 20 years ago. The scandal was real, and Hood was involved. But he was the one who went to the authorities about it and was never charged with a crime, as Reuters reports his lawyers saying.

Now, it’s clear that this statement is false and unquestionably harmful to Hood’s reputation. But who made the statement? Is it OpenAI, which developed the software? Is it Microsoft, which licensed it and deployed it in Bing? Is it the software itself, acting as an automated system? If so, who is liable for prompting that system to produce the statement? Does making such a statement in such a setting constitute publishing it, or is it more like a conversation between two people, and therefore slander rather than libel? Did OpenAI or ChatGPT “know” the information was false, and how would negligence be defined in such a case? Can an AI model exhibit malice? Does it depend on the law, the case, the judge?

These are all open questions, because the technology they concern didn’t exist a year ago, let alone when the laws and precedents legally defining defamation were established. It may seem silly to sue a chatbot over a false statement, but chatbots are no longer what they once were. With some of the world’s largest companies proposing them as the next generation of information retrieval, replacing search engines, these are no longer toys but tools that millions of people use regularly.

Hood has sent a letter to OpenAI asking it to do something about this; it’s not clear what it can do under Australian or U.S. law, or whether it is compelled to do anything at all. But in another recent case, a law professor was accused of sexual harassment by a chatbot citing a fictional Washington Post article. Such false and potentially damaging statements may be more common than we think.

This courtroom drama is only just beginning, and even lawyers and AI experts have no idea how it will play out. But if companies like OpenAI and Microsoft (not to mention every other major tech company and hundreds of startups) expect their systems to be taken seriously as sources of information, they cannot avoid the consequences of those claims. They may suggest recipes or travel plans as starting points, but people understand that these companies are presenting their platforms as a source of truth.

Will these troubling statements turn into actual lawsuits? Will those lawsuits be resolved before the industry shifts yet again? It’s going to be an interesting few months (or perhaps years) as legal and technology experts attempt to grapple with the fastest-moving target in the industry.




