I've been serializing my Large Libel Models? Liability for AI Output draft article these past two weeks; see my earlier posts (covering § 230, disclaimers, publication, and more) here. One of the most important points is that a communication can be defamatory even if readers realize there's a considerable risk of error. Today I close with some thoughts on how my defamation-focused analysis can be generalized to other torts.
[* * *]
[A.] False Light
Generally speaking, false light tort claims should be treated the same way as defamation claims. To be sure, one distinctive feature of the false light tort is that it provides a remedy for false statements about a person that aren't defamatory but are merely highly offensive to the person (judged by a reasonable person standard). Perhaps that kind of harm, unlike reputational harm, can't justify the chilling effect that liability would impose on AI companies. Indeed, that may be part of the reason some states don't recognize the false light tort at all.
Still, if AI platforms are already required to deal with false material, especially outright fabricated quotes, through notice-and-blocking procedures or mandatory quote-checking mechanisms, then adapting those mechanisms to false light claims should add little further chilling effect on AI programs' valuable design features.
[B.] Disclosure of Private Facts
LLMs seem unlikely to output material that constitutes actionable disclosure of private facts. Genuinely private information about people (e.g., sexual or medical details that haven't been made public) is unlikely to appear in LLM training data, which is generally drawn from publicly available sources. And if an LLM's algorithms make up such details, the output isn't a disclosure of private facts, precisely because it's false.
Still, it's possible that an LLM's algorithms will occasionally happen to output accurate claims about a person's private life. ChatGPT appears to include code that keeps it from reporting the most common categories of private information, such as sexual and medical history, even when the information is actually public and thus not tortious to disclose. But not all LLMs will necessarily include such constraints.
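To make this concrete, here's a minimal sketch of what a category-based output constraint might look like, in Python and with hypothetical names throughout; OpenAI's actual code isn't public, so this is purely illustrative:

```python
# Purely illustrative sketch of a category-based output constraint;
# not any vendor's actual implementation.

# Categories of information the operator chooses not to report about
# identifiable people, even when the information may be public.
SUPPRESSED_CATEGORIES = {"sexual_history", "medical_history"}

def classify_categories(text: str) -> set[str]:
    """Stand-in for a real classifier; here, just a crude keyword check."""
    lowered = text.lower()
    categories = set()
    if any(term in lowered for term in ("diagnosed with", "treated for", "medical record")):
        categories.add("medical_history")
    if any(term in lowered for term in ("affair with", "sexual history")):
        categories.add("sexual_history")
    return categories

def constrain_output(text: str, names_identifiable_person: bool) -> str:
    """Withhold output about identifiable people that falls into a suppressed category."""
    if names_identifiable_person and classify_categories(text) & SUPPRESSED_CATEGORIES:
        return "[Output withheld: possible private details about an identifiable person.]"
    return text
```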
In principle, a notice-and-blocking remedy should be available here as well. And because the disclosure tort generally requires deliberate conduct, negligence liability should generally be unavailable.
[C.] False Statements That May Lead to Injury
What if an LLM outputs false information that people are likely to rely on in ways that injure person or property, such as inaccurate medical information?[1]
Current law is unclear about when such falsehoods can give rise to liability. The Ninth Circuit has rejected product liability and negligence claims against the publisher of The Encyclopedia of Mushrooms, which allegedly "contained erroneous and misleading information concerning the identification of the most deadly species of mushrooms,"[2] partly for First Amendment reasons.[3] But there is little other caselaw on the question. And the Ninth Circuit's decision left open the possibility of liability in lawsuits alleging "fraudulent, willful, or malicious misrepresentation."[4]
Here too, the model I discussed for libel may make sense. If there is liability for knowingly communicating false statements that may lead to injury, an AI company could be held liable once it receives actual notice that its programs are outputting particular false factual assertions and yet refuses to block that output. Again, imagine the program is outputting what purports to be an actual quote from a reputable medical source, but the quote was in fact made up by the algorithms. Such material may seem especially trustworthy, and may therefore be especially dangerous. And once the AI company is notified about the fake quote, it should be relatively easy for it to add code that blocks that quote from being output again.
Likewise, if a negligent design theory is viable, liability might rest on, say, the company's negligent failure to add code that checks purported quotes against the claimed sources and blocks the output of fabricated ones, perhaps for all quotes, not just ones it has been specifically notified about.
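A rough sketch may help show why this is technically feasible. The following Python snippet (with hypothetical names; again, not any actual vendor's code) illustrates a notice-based blocklist combined with a crude check of purported quotes against the cited source:

```python
import re

# Purely illustrative sketch of notice-and-blocking plus quote-checking;
# not any vendor's actual implementation.

# Exact quotes the operator has been put on actual notice are fabricated.
BLOCKED_QUOTES: set[str] = set()

def extract_quotes(text: str) -> list[str]:
    """Pull out material the output presents as a direct quotation."""
    return re.findall(r'"([^"]+)"', text)

def source_contains(quote: str, source_text: str) -> bool:
    """Crude verification: does the cited source actually contain the quoted words?"""
    return quote.lower() in source_text.lower()

def filter_output(text: str, cited_source_text: str | None = None) -> str:
    """Suppress output containing blocked or unverifiable purported quotes."""
    for quote in extract_quotes(text):
        if quote in BLOCKED_QUOTES:
            return "[Output withheld: contains a quote reported to us as fabricated.]"
        if cited_source_text is not None and not source_contains(quote, cited_source_text):
            return "[Output withheld: quote could not be verified against the cited source.]"
    return text
```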
[D.] Accurate Statements That May Facilitate Crime by Some Readers
AI programs may communicate accurate information that some readers can use for criminal purposes. That might include information about how to make bombs, pick locks, or circumvent copyright protection measures, or information identifying particular people who have done things for which some readers might want to retaliate against them.
Whether such "crime-facilitating" speech is constitutionally protected against criminal and civil liability is a difficult and unresolved question, which I've tried to address in another article.[5] But again, if there is liability for knowingly distributing such speech (possible) or negligently distributing it (unlikely, for reasons explained in that article), then the analysis above would apply.
On the other hand, if liability is limited to purposeful distribution of crime-facilitating speech, as some statutes and proposals provide,[6] then the company would be free of such liability unless the employees responsible for the software actually aim to facilitate such crimes through the software's use.
[1] See Jane Bambauer, Negligent AI Speech: Some Thoughts About Duty, 3 J. Free Speech L. __ (2023).
[2] Winter v. G.P. Putnam's Sons, 938 F.2d 1033, 1034 (9th Cir. 1991).
[3] Id. at 1037.
[4] Id. at 1037 n.9.
[5] Eugene Volokh, Crime-Facilitating Speech, 57 Stan. L. Rev. 1095 (2005).
[6] See id. at 1182–85.