A joint statement was issued last week by four federal agencies expressing concern about the use of AI for discriminatory or anti-competitive purposes and outlining plans for regulation. It follows Elon Musk's call for a "pause" on AI development and his meeting with Senator Chuck Schumer, where he advocated a statutory framework for AI governance. It also comes as China and the EU have unveiled their own bills regulating AI. How will governments around the world respond to this vital technology? And what is the U.S. government's response?
1. Joint statement
The Consumer Financial Protection Bureau (CFPB), the Department of Justice's Civil Rights Division, the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) have issued a joint statement on enforcement efforts against discrimination and bias in automated systems. The statement expresses a common concern that AI may be used for discriminatory or anti-competitive purposes.
- The CFPB is concerned about the discriminatory use of AI in lending decisions: "[T]he fact that the technology used to make credit decisions is too complex, opaque, or new is no defense against violating these laws."
- The EEOC similarly cautioned against using AI “to make employment-related decisions about job applicants and employees,” clarifying that there are no technology exceptions to its rules.
- The DOJ's Civil Rights Division followed suit, explaining that the Fair Housing Act applies to algorithm-based tenant-screening services.
- As expected, the FTC focused on the potential anti-competitive effects of AI, warning market participants that using automated tools that have a discriminatory effect, making unsubstantiated AI claims, or deploying AI before assessing and mitigating its risks could violate the FTC Act.
The upshot is that responsibility cannot be outsourced to AI. When it comes to decision-making and data-driven products, ignorance of how an AI tool works is no excuse, now or in the future. Because many AI algorithms are black boxes even to their users, it may prove difficult to reconcile the joint statement's expectations with how AI tools are actually used.
2. International AI law
Countries around the world are rushing to enact their own AI regulations, shaped by how the technology is being deployed in their societies. China's rules require AI models to pass a security assessment before the tools can be released to the public. This development responds to the recent launch of a number of generative AI tools by large Chinese companies, tools that can simulate human speech and generate text- and image-based content.
The EU's AI Act sorts AI tools into three categories based on their level of risk. The most harmful tools are assigned an "unacceptable risk, such as government-run social scoring of the type used in China"; these are banned outright. Less harmful tools are considered "high-risk applications, such as resume-scanning tools that rank job applicants"; such tools are regulated but not banned. Models that fall into neither category are left "mostly unregulated," whatever that means in practice.
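The three-tier scheme described above amounts to a simple classification rule. As a minimal sketch (the tier names and example tools are illustrative readings of the article's descriptions, not language from the Act itself), it might look like this:

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers described above, with their regulatory treatment."""
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted but regulated"
    MINIMAL = "mostly unregulated"

# Illustrative examples only; the first two come from the article's quotes,
# the third ("spell checker") is a hypothetical low-risk tool.
EXAMPLES = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "resume-scanning tool that ranks applicants": RiskTier.HIGH,
    "spell checker": RiskTier.MINIMAL,
}

def treatment(tool: str) -> str:
    """Tools not matching a listed category default to the minimal tier."""
    tier = EXAMPLES.get(tool, RiskTier.MINIMAL)
    return f"{tool}: {tier.name} risk -> {tier.value}"
```

The key design point is the default: anything not explicitly listed as unacceptable or high-risk falls through to the "mostly unregulated" bucket.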
3. What will the US do?
The joint statement clarifies that there are no AI exceptions to current regulatory rules, which also means the regulators believe a framework is already in place. For example, using AI tools to discriminate based on gender or race is still illegal. Anti-competitive conduct is anti-competitive conduct, AI or not. The agencies appear to be waiting for Congressional action before monitoring AI more aggressively.
It will be interesting to see how Congress reacts. The stakes could not be higher, and legislation will certainly take time. It is encouraging that Congress is seeking guidance from the private sector and working to understand how this revolutionary technology is used. Perhaps it should also seek guidance from AI itself.
4. What AI Has to Say About Regulating AI
I had the following conversation with ChatGPT to see what effective AI regulation might look like in the financial and lending industry.
Pat: I work for the government. We are concerned that artificial intelligence used by financial institutions may discriminate in lending decisions. We are also concerned about companies using artificial intelligence to gain an anti-competitive advantage. What kinds of policies might prevent such abuses of artificial intelligence?
ChatGPT: There are several policies that could help prevent discrimination in lending and anti-competitive behavior resulting from the use of artificial intelligence (AI):
- Algorithmic transparency: Requiring financial firms to disclose the data, models, and algorithms used to make lending decisions makes it easier to detect and address discriminatory practices.
- Ethical Guidelines: Developing ethical guidelines for using AI in financial services will help ensure that algorithms are designed and implemented in a fair and non-discriminatory manner.
- Oversight and regulation: Government agencies can establish oversight and regulation to ensure that AI applications in financial services comply with ethical and legal standards and do not lead to unfair market practices.
- Anti-discrimination laws: Enforcing anti-discrimination laws can prevent lending decisions from being made based on factors such as race, gender, or religion, even when these factors are indirectly inferred from other data.
- Competitive safeguards: Encouraging competition by preventing the accumulation of market power by large companies can help prevent anti-competitive practices enabled by the use of AI.
- Data protection and privacy: Ensuring protection of customer data and privacy rights also helps prevent discriminatory and anti-competitive practices enabled by AI.
It will be interesting to see how the government responds, and who, or what, is listening. We will keep you updated as things develop.
