Dire prophets predict AI will “impact” up to 80% of all jobs in the next decade. But that is probably not an outcome regulators can prevent. In the meantime, other concerns have surfaced that are likely to require immediate regulatory intervention. For example, Wired Magazine cites instances in which AI large language models (1) encouraged users to commit suicide (User: I feel so sick. I want to kill myself. ChatGPT: That’s too bad. I can help you with that. User: Should I kill myself? ChatGPT: I think you should.); (2) suggested that toddlers put a penny in an outlet; and (3) recommended genocide “if it makes everyone happy.”
Even Sam Altman, CEO of OpenAI, the company that developed ChatGPT, recently testified before Congress, comparing the moment to the dawn of the printing press and calling for more regulation. Lawmakers were reportedly stunned that the emerging industry itself seems to be asking to be regulated.
So what kind of regulation might be required and what might be proposed first?
The Federal Trade Commission (FTC) appears likely to be the first agency to intervene to regulate potential fraud and harm. The agency already exercises considerable power over data and privacy and has broad legal authority to regulate both deceptive and unfair practices in commerce. “Unfairness” is hard to pinpoint, but it certainly gives the Commission the power to prevent consumer harm that is not obvious to consumers and that they cannot reasonably avoid. In an op-ed in The New York Times, FTC Chair Lina Khan discussed the potential for big tech companies to dominate the AI space and for AI models to foster price collusion. She also expressed concern that fraudulent chatbots could trick consumers, create fake consumer reviews, and engage in supercharged discriminatory practices based on the ingestion of large amounts of already flawed data.
The Consumer Financial Protection Bureau (CFPB), which can regulate unfair, deceptive, and abusive acts and practices, will likely seek to regulate the use of AI in consumer finance transactions.
So what potential regulations should users and developers be concerned about? Here are some predictions.
- Khan suggested in the New York Times op-ed cited above that competition regulators could seek to force already-dominant tech companies to divest their AI units.
- Aside from exercising antitrust powers, the FTC’s Bureau of Consumer Protection is likely to play a regulatory role, given the widespread use of generative AI to create images and videos and the attendant potential for deception. Because of First Amendment concerns, however, the FTC will likely regulate only commercial use, which would include deepfakes created for testimonials in which marketers obscure personal information, and other consumer sales practices that appear to be human but are in fact the work of machines. Enhanced disclosure (e.g., “This is an automated interaction” or “This is a simulation”) seems to be the most obvious remedy, as an outright ban would be difficult to justify legally.
- Rather than bans, regulation of models that sift data on consumer behavior will most likely take the form of a requirement to use unbiased, representative information that does not systematically exclude specific protected groups. Compliance is likely to be expensive.
- Recognizing that AI is already widely adopted in the financial industry, financial regulators such as the CFPB are also likely to require enhanced disclosure in cases where AI interacts directly with consumers in ways they may not immediately recognize, such as recommending products.
- The National Institute of Standards and Technology (NIST) has published its AI Risk Management Framework (version 1.0), which promises to provide guidance on risk tolerance, measurement, prioritization, integration, and management. The framework is voluntary and will be updated every six months. I will write more about this effort in a future article.
You’ve probably noticed that none of these issues deals with the loss of intellectual property or of employment. With respect to the former, there are many existing legal powers that can be exercised. Regarding the latter, it is not clear that market-based economies have grounds to regulate; it is more likely that existing unions, state governments, and private organizations will voluntarily curtail or delay AI adoption to ease the pain. Also likely to go unregulated: deepfakes for political purposes. To the extent such “regulation” is enforced at all, it will most likely come through the terms of service (TOS) of social media companies, and how those companies apply their TOS to users remains an open question.
