Labeling erroneous AI output as deceptive would be overreach by the FTC

The Center for Artificial Intelligence and Digital Policy (CAIDP) recently filed a complaint with the Federal Trade Commission (FTC) asking the agency to investigate OpenAI. The complaint alleges that when GPT-4 produces false information, “for the purposes of the FTC, these outputs should best be understood as ‘deceptive.’” In support, it points to a recent FTC blog post about AI tools that can “create or spread deception,” in which the agency warned that it can be illegal to make, sell, or use tools effectively designed to deceive and urged companies to address those risks. However, classifying erroneous output from AI models as “deceptive practices” under the FTC Act is misguided for four reasons.

First, an incorrect answer is not deception; it is simply a mistake. Search engines return wrong answers, GPS systems give wrong directions, and weather forecasts miss the mark. Unless the FTC plans to label all of these errors “deceptive,” it should not do so for erroneous AI output. After all, as the poet Alexander Pope famously wrote, “to err is human.” The FTC should not hold AI systems to a higher standard of accuracy than other technologies, or than human experts.

Second, even if some AI systems are designed to deceive, that is not necessarily something regulators should stop. Many legitimate companies make products designed to deceive, such as photo-editing software, cosmetics, and magic props; indeed, many photo filters already have AI built in. Unless the FTC plans to shut all of these down, it should not arbitrarily target AI companies, especially those whose false answers do not further malicious purposes or harm consumers.

Third, the FTC does not have the authority to regulate AI systems in the manner CAIDP advocates under the FTC Act’s prohibition on “deceptive acts or practices.” The FTC’s Deception Policy Statement makes clear that this authority covers “representations, omissions, or practices” that are likely to mislead consumers, such as inaccurate information in marketing materials or a failure to perform promised services. It would be perfectly reasonable for the FTC to use this authority to investigate an AI company for making false claims about its products, but using the same authority to scrutinize the outputs of that company’s AI systems is something else entirely.

Fourth, such a ruling would derail AI development in the United States. No company could bring a new AI system to market if it had to be 100 percent accurate all the time, because AI systems learn from real-world data, which is often flawed. Imagine if the FTC had ruled in 1938 that radio stations were liable for deception whenever they broadcast falsehoods; Americans might never have enjoyed news and sports on the radio.

In conclusion, CAIDP’s claim that GPT-4’s mistakes should be treated as illegal deception misses the mark. There are many genuinely deceptive practices that warrant the FTC’s attention, but GPT-4’s erroneous output is not among them.

Image credit: Flickr user Emma K Alexandra


