You may have heard of simulation theory, the notion that nothing is real and that we are all part of a giant computer program. Let’s assume, at least for the length of this blog post, that this idea is not true. Nonetheless, we may be headed toward a future in which a significant portion of what we see, hear, and read is a computer-generated simulation. We always keep it real here at the FTC, but what happens when no one can tell the difference between real and fake?
In a recent blog post, I explained how the term “AI” can be used as a deceptive selling point for new products and services. Call that the fake AI problem. Today’s topic is the use of AI behind the screen to create and spread deception. Call this the AI fake problem. The latter is a newer and deeper threat that companies across the digital ecosystem need to contend with. Now.
Most of us spend a lot of time looking at things on our devices. Thanks to AI tools that create “synthetic media” or otherwise generate content, a growing proportion of what we see is not authentic, and it is getting harder to tell the difference. And just as these AI tools have become more sophisticated, they have also become easier to access and use. While some of these tools have beneficial uses, fraudsters can also use them to cause widespread harm.
Generative AI and synthetic media are colloquial terms for technologies that simulate human activity, such as chatbots developed from large language models and software that creates deepfake videos and voice clones. Evidence already exists that scammers can use these tools to generate realistic but fake content quickly and cheaply, spreading it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate identity theft, extortion, and financial fraud. And this is not a complete list.
The FTC Act’s prohibitions against deceptive or unfair conduct can apply if you create, sell, or use a tool that is effectively designed to deceive, even if that is not its intended or sole purpose. So consider the following:
Should you even be making or selling it? If you develop or offer a synthetic media or generative AI product, consider at the design stage and afterward the reasonably foreseeable, and often obvious, ways it could be used for fraud or cause other harm. Then ask yourself whether such risks are high enough that the product should not be offered at all. Sorry for the meme, but to paraphrase Dr. Ian Malcolm, the Jeff Goldblum character in “Jurassic Park”: he cautioned that executives were so preoccupied with whether they could build something that they did not stop to think whether they should.
Are you effectively mitigating the risks? If you decide to make or offer such a product, take all reasonable precautions before it hits the market. The FTC has sued companies that disseminated potentially harmful technologies without taking reasonable steps to prevent consumer harm. Merely warning customers about misuse or telling them to make disclosures is not enough to deter bad actors. Deterrents should be durable, built-in features, not bug fixes or optional features that third parties can undermine by modifying or removing them. If your tool is intended to help people, also ask yourself whether it really needs to emulate a human, or whether it could be just as effective looking, talking, and acting like a bot.
Are you over-relying on post-release detection? Researchers continue to improve detection methods for AI-generated video, images, and audio. Recognizing AI-generated text is even more difficult. But these researchers are in an arms race with the companies developing generative AI tools, and the scammers using these tools will often have moved on by the time someone detects their fake content. In any case, the burden should not fall on consumers to figure out whether a generative AI tool is being used to defraud them.
Are you misleading people about what they see, hear, or read? If you are an advertiser, you may be tempted to use these tools to sell just about anything. Celebrity deepfakes, for example, are already commonplace and have been appearing in ads. We have previously warned companies that misleading consumers via doppelgängers, such as fake dating profiles, fake followers, deepfakes, or chatbots, could result in FTC enforcement action, and in fact it already has.
While the focus of this post is on fraud and deception, these new AI tools carry many other serious concerns, including potential harm to children, teens, and other populations at risk when interacting with or being affected by these tools. Commission staff are tracking these concerns closely as companies rush these products to market and as human-computer interactions continue to take new and possibly dangerous turns.