Consumer products companies, like many others, are turning to artificial intelligence-powered automated decision-making (ADM) technology to streamline operations and improve the customer experience.
Companies implementing AI must go beyond compliance and develop comprehensive risk management strategies.
Government enforcement officials are also watching closely, and recent incidents and guidance from authorities highlight the dangers of not adopting appropriate safeguards for AI deployments.
Facial Recognition Technology
In December, the Federal Trade Commission filed a complaint and proposed a settlement regarding Rite Aid's use of AI-based facial recognition surveillance technology in its stores.
Rite Aid allegedly used facial recognition technology to falsely identify innocent customers as shoplifting suspects, resulting in increased surveillance and false accusations of criminal activity, in violation of Section 5 of the FTC Act. These misidentifications allegedly fell disproportionately on women and people of color.
According to the FTC, the pharmacy chain failed to conduct reasonable due diligence before purchasing and deploying the technology, failed to provide adequate training and oversight for employees using the system, and neglected to properly test and monitor the system's accuracy. The FTC also alleged that the deployment violated a previous settlement of alleged privacy violations.
To resolve the action, Rite Aid agreed to a near-total ban on its use of facial recognition technology for five years, destruction of the data it collected, and a risk management program governing any future use of AI-based “automated biometric security or surveillance systems” without the “affirmative express consent” of the people targeted. It also agreed to an extensive information security program subject to third-party oversight, annual CEO compliance certifications, and other obligations. Apart from the first, these obligations last for 20 years.
FTC Commissioner Álvaro Bedoya described the risk management program as “the baseline for a comprehensive algorithmic fairness program,” but there are other templates, as discussed below.
Privacy Promises
Misuse of data or failure to honor privacy promises in AI or other ADM systems can also lead to FTC enforcement actions. Recent examples offer several lessons for companies to consider.
First, the FTC has pursued a steady stream of investigations, and the number of settlements continues to grow. Companies need to take their privacy promises seriously and maintain a systematic program to monitor compliance.
Second, companies that don't take their privacy promises seriously, or that lack a systematic privacy compliance program, risk losing their data and the algorithms trained on that data.
The FTC’s standard remedies for broken privacy promises include “disgorgement” of improperly collected, retained, or used information and of the algorithms derived from it. Companies need to pay special attention when data and algorithms form a central part of their business plans.
Consumer Credit Laws
The use of AI algorithms for credit assessment offers potential benefits to both lenders and borrowers. However, lenders must be careful not to violate the Equal Credit Opportunity Act and the Fair Credit Reporting Act.
ECOA prohibits loan underwriting algorithms that discriminate on the basis of protected characteristics, such as race, sex, or receipt of public assistance. The FTC insists that lenders test and monitor whether their models potentially lead to unlawful discrimination, even if the lenders do not collect protected class information.
For example, the FTC expects lenders to ensure that AI models do not rely on data that closely correlates with protected class membership in ways that would produce unlawful results.
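What such testing might involve will vary, but one common screen is an adverse impact ratio comparison of approval rates across groups. The following is a minimal illustrative sketch, not anything the FTC prescribes: the pandas DataFrame layout, the column names, the toy data, and the four-fifths (0.8) rule of thumb, which is borrowed from the employment context, are all assumptions for illustration. It also assumes group labels are available for testing purposes, whether collected or estimated through proxy methods.

```python
# Illustrative disparate impact screen (hypothetical data and column
# names; a screening heuristic, not a legal or statistical conclusion).
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         approved_col: str, reference_group: str) -> pd.Series:
    """Each group's approval rate divided by the reference group's rate.

    Ratios below roughly 0.8 (the "four-fifths" rule of thumb) are a
    common flag for deeper disparate impact analysis.
    """
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Toy example: model decisions joined with group labels used solely
# for fairness testing.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})
print(adverse_impact_ratio(decisions, "group", "approved", reference_group="A"))
```

A ratio well below 0.8 for any group would not itself establish a violation, but it would typically prompt further statistical and legal review of the model and its inputs.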
Additionally, both ECOA and FCRA require a lender that takes an adverse action to notify the applicant of the “key factors” that influenced the outcome (under FCRA, when the decision is based on a credit score).
Both the FTC and the Consumer Financial Protection Bureau have argued that lenders cannot use algorithms in underwriting if they cannot identify the key factors for these notices.
Other Risks
Additional regulations apply to companies' AI adoption regardless of sector. For example, AI-based hiring tools trained or operated on biased data can result in violations of anti-discrimination laws. State and local laws also address various other uses of AI, so companies should monitor potential new requirements in the jurisdictions in which they operate.
Additionally, just as state and federal laws prohibit making false or unsubstantiated marketing claims, the Securities and Exchange Commission monitors public companies for misleading claims regarding their deployment of AI technology.
Beyond government enforcement, AI failures can harm businesses in many other ways, including:
- Operational shutdowns when a system running a mission-critical process malfunctions
- Reputational damage
- Lawsuits from aggrieved individuals (based on tort, contract, or statutory theories), securities class actions from shareholders, or commercial litigation between customers and suppliers
Protect Your Company
The risks of AI deployment go beyond government regulations, so companies need to manage them holistically rather than just focusing on compliance. It may seem daunting, but companies without an AI risk management program should start creating one now. As guidance, regulation, and AI use cases continue to proliferate, waiting will only make it more difficult to tackle the problem.
Fortunately, no company needs to reinvent the wheel. The National Institute of Standards and Technology offers a highly regarded AI Risk Management Framework and an accompanying playbook. Alternatively, companies can implement ISO/IEC standards 42001 and 23894.
Whichever foundation you build your program on, consider how your company's existing privacy policies, codes of conduct, and other policies already address, or can be adapted to address, your particular AI risks.
As you progress, try not to let the perfect be the enemy of the good. An 80% program is much better than nothing. AI technology is rapidly evolving, so there will be many opportunities for revision.
Establishing a strong risk management program can go a long way in ensuring that companies can benefit from AI and other ADM implementations while avoiding regulatory and other land mines.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Raqiyyah Pippins is co-leader of Arnold & Porter's Consumer Products Practice Group and Consumer Products and Retail Industry Team.
Peter Schildkraut is co-leader of Arnold & Porter's Technology, Media and Telecommunications industry team.
Alexis Sabet contributed to this work.