On March 20, 2023, the Federal Trade Commission (FTC) released a blog post advising companies to consider deceptive or abusive uses when employing artificial intelligence (AI) tools to generate synthetic media. The FTC refers to the use of AI to create or spread deception as the "AI fake" problem and describes it as a rising one. According to the FTC, fraudsters have used generative AI and synthetic media to generate and propagate false narratives at scale and at low cost. The agency warns that AI chatbots can be used to create phishing emails, fake websites, and fake profiles, and to generate malware, ransomware, and prompt injection attacks.
Over the past few months, the FTC has closely monitored the development of AI technology and has issued several pieces of AI-related guidance. For example, in February 2023, the agency published a blog post warning companies that rely on (or claim to rely on) AI not to exaggerate product claims or pose reasonably foreseeable risks to consumers. The recent blog post reflects the FTC's continued focus on this topic. Businesses should be aware that the FTC can initiate enforcement actions to punish conduct it deems unfair or deceptive.
Blog post summary
The FTC urges companies to ask themselves four questions before creating, selling, or using AI:
1. Should you make or sell it? The FTC directs companies that develop or offer synthetic media or generative AI products to consider this question during the design phase.
2. Are you effectively mitigating risk? The FTC directs companies that develop or offer AI products to take "reasonable precautions" before entering the market. The agency deems it insufficient merely to warn consumers about misuse or to require users to make disclosures. Instead, it advises companies to build deterrence measures that are "durable, built-in features" and "not bug fixes or optional features" that bad actors can change or remove. The FTC adds that companies should rethink whether a product really needs to be anthropomorphized or emulate humans, or whether it could achieve a similar effect by acting like a bot.
3. Are you relying too much on post-launch detection? The FTC acknowledges that researchers are improving their ability to detect AI-generated content, but cautions that they remain in an arms race with companies developing generative AI tools, and that fraudsters using those tools are often already on the move by the time someone detects their counterfeit content. The agency suggests that the burden of detecting misuse should fall on the companies offering these tools rather than on consumers.
4. Are you misleading people about what they see, hear, or read? The FTC tells advertisers to think twice before using AI-generated content. The agency notes that it has warned companies against misleading consumers via fake dating profiles, fake followers, deepfakes, and chatbots, and that it has taken enforcement action against such conduct.
Chatbots and generative AI pose many legal and business risks, which we covered in detail in a recent article. As the FTC's recent guidance shows, one of these risks is the possibility that consumer protection agencies will use laws designed to prevent deceptive trade practices to regulate AI tools. As the FTC points out in its blog post, the agency has taken enforcement action against companies that use fake online content, in some cases requiring them to destroy the underlying algorithms that power their systems. Companies using this technology should carefully and deliberately consider the FTC's guidance and best practices on AI development and use.
Separately, Italy's privacy regulator recently imposed a temporary restriction barring OpenAI's ChatGPT from processing the data of individuals residing in Italy. We will continue to update you on key developments in the legal and business risks of generative AI and related technologies.