Director Chopra’s Prepared Remarks on Interagency Executive Policy Statement on “Artificial Intelligence”

Applications of AI


In recent years, the automation of decision making in our daily lives has been rapidly accelerating. Across the digital world and the economy, so-called “artificial intelligence” is automating activities in previously unimaginable ways.

Generative AI, which can generate sounds, images, and videos designed to simulate real-world human interactions, raises a wide range of potential harms, from consumer fraud to privacy to fair competition.

Today, several federal agencies have come together to make one point clear: US civil rights law contains no exceptions for new technologies that engage in unlawful discrimination. Companies must be held accountable for their use of these tools.

The interagency statement we are issuing today aims to take an important step in affirming existing laws and curbing illegal and discriminatory practices carried out by those who deploy these technologies.1

The statement emphasizes a whole-of-government approach to enforcing existing legislation and working together to tackle “AI” risks.

Threats posed by so-called “artificial intelligence”

Uncontrolled “AI” poses a threat to fairness and our civil rights in ways that are already being felt.

Tech companies and financial institutions are accumulating vast amounts of data and increasingly using it to make decisions about our lives, such as whether we qualify for a loan or which ads we see.

Machines crunching numbers may appear to take human bias out of the equation, but that is not really what is happening. Findings from academic research and news reports raise serious questions about algorithmic bias. For example, a statistical analysis of two million mortgage applications found that Black families were 80% more likely to be rejected by an algorithm than white families with similar economic backgrounds and credit profiles. The mortgage company’s response was that the researchers had neither all the data fed into the algorithm nor perfect knowledge of the algorithm itself. Artificial intelligence often feels like a black box behind a brick wall.2

If consumers and regulators do not know how artificial intelligence makes decisions, consumers cannot participate in an unbiased, fair, and competitive marketplace.

CFPB Actions to Protect Consumers

That is why the CFPB and others are prioritizing the fight against digital redlining: redlining caused by biases embedded in lending algorithms, home valuation algorithms, and other technologies marketed as artificial intelligence. These biases are disguised behind supposedly neutral algorithms, but they can be baked into an AI system like any other flaw.

We are working hard to reduce bias and discrimination in home valuations, including algorithmic valuations. To address discrimination, we have proposed rules to ensure that artificial intelligence and automated valuation models include basic safeguards.

We are also scrutinizing algorithmic advertising, which is often marketed as “AI” advertising. We have published guidance on how lenders and other financial providers will be held accountable for their targeted advertising practices. Specifically, advertising and marketing that uses sophisticated analytical techniques can expose companies to legal liability, depending on how those practices are designed and implemented.

We are also taking steps to protect the public from black box credit models. In some cases, the models that financial institutions rely on are too complex for the institutions themselves to explain the results. But businesses must explain why credit was denied, and using a complex algorithm is no defense for failing to provide specific and accurate reasons.

Developing new ways to improve home valuation, financing, and marketing is not inherently bad. But when done irresponsibly, such as by building black box models or failing to examine input data for bias, these products and services pose a real threat to consumers’ civil rights. They also undermine the ability of law-abiding startups and entrepreneurs to compete with those who break the law.

The CFPB is pleased to continue contributing to this government-wide mission of ensuring that the laws we collectively enforce are upheld, regardless of the technology used.

Thank you.

Footnotes

  1. Joint Statement on Enforcement Efforts Against Discrimination and Prejudice in Automated Systems
  2. Director Rohit Chopra on Trustmark National Bank Enforcement Actions at Joint Department of Justice, CFPB and OCC Press Conference | Consumer Financial Protection Bureau (consumerfinance.gov)
