The Luring Test: AI and the Engineering of Consumer Trust

In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, and the person ends up trapped instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s exactly what it did. While the scenario is pure speculation, companies are always looking for new ways to better persuade people and change their behavior, such as by using generative AI tools. When that conduct is commercial in nature, we’re in FTC territory, terrain where businesses should know how to avoid practices that harm consumers.

Previous blog posts have focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims about AI products and the use of generative AI for fraud. The design or use of a product can also violate the FTC Act if it is unfair, something we have shown in several cases and discussed in connection with AI tools that produce biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. More specifically, it is unfair if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition.

When it comes to the new wave of generative AI tools, companies are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer questions in confident language, even when the answers are fictional. The tendency to trust the output of these tools comes in part from “automation bias,” whereby people may be overly trusting of answers from machines that seem neutral or impartial. It also comes from anthropomorphism: people may trust chatbots more when they are designed, for example, to use personal pronouns and emojis. People can easily be led to think that they’re conversing with something that understands them and is on their side.

Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust. Concern about their malicious use goes well beyond the FTC’s jurisdiction. A key FTC concern, however, is companies using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment. Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, including recent actions relating to financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, a practice can be unlawful even if not all customers are harmed and even if those harmed are not part of a class protected by anti-discrimination laws.

Another way marketers could take advantage of these new tools and their manipulative abilities is to place ads within a generative AI feature, just as they can place ads in search results. The FTC has repeatedly studied and provided guidance on presenting online ads, in search results and elsewhere, to avoid deception and unfairness. This includes recent work related to dark patterns and native advertising. Among other things, it should always be clear that an ad is an ad, and search results and generative AI output should distinguish clearly between what is organic and what is paid. People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know whether they’re communicating with a real person or a machine.

Given these many concerns about the use of new AI tools, it’s perhaps not the best time for companies building or deploying them to remove or fire personnel dedicated to ethics and responsibility in AI and engineering. If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look. What would look better? We’ve provided guidance in earlier blog posts and elsewhere. Among other things, risk assessment and mitigation should account for foreseeable downstream uses and the need to train staff and contractors, as well as monitoring and addressing the actual use and impact of any tools eventually deployed.

While the FTC doesn’t always disclose its work in advance, staff are focusing intently on how companies may choose to use AI technology, including new generative AI tools, in ways that can have an actual and substantial impact on consumers. And those interacting with chatbots and other AI-generated content should heed Prince’s warning from 1999: “It’s cool to use the computer. Don’t let the computer use you.”


