California's Two-Pronged Approach to Regulating AI in Employment and Beyond

California lawmakers and regulators have taken important steps to regulate the use of artificial intelligence (AI) in making critical decisions, such as employment decisions, to prevent algorithmic discrimination. The California Civil Rights Council has proposed amendments to the Fair Employment and Housing Act (FEHA) that specifically target employment discrimination by automated decision-making systems. At the same time, the California Legislature continues to debate AB 2930, a broad measure that would address the use of AI in a variety of sectors. These efforts aim to ensure that technological advances do not perpetuate existing biases or create new forms of discrimination.

California Civil Rights Council Proposed Rule

The California Civil Rights Council has proposed amendments to the FEHA in response to growing concerns about algorithmic bias in hiring practices. These amendments aim to modernize hiring practices and align them with broader efforts such as the White House’s AI Bill of Rights Blueprint and the EEOC’s Algorithmic Fairness Guidelines.

Definition and scope of AI

Under the proposed amendments, an “automated decision system” is defined as a computational process that screens, evaluates, categorizes, recommends, makes, or facilitates decisions affecting applicants or employees. This includes systems that use machine learning, algorithmic, statistical, or other data processing or AI techniques. The proposed amendments cover activities such as computer-based testing, targeted job advertising, resume screening, and online interview analysis.

Who is affected?

The proposed rule would apply to any organization that regularly makes payments to five or more individuals for work or services, including an employer's agent or an employment agency. This broad definition expands coverage, and potential liability, for discriminatory practices that arise from the use of automated decision-making systems.

Impact on employers

Employers who use automated decision systems must ensure that these systems do not result in adverse impact or unequal treatment based on characteristics protected by FEHA. Employers can be held liable for discrimination resulting from these systems; however, they can defend their use by showing that the selection criteria were job related and consistent with business necessity, and that no less discriminatory alternatives were available. Employers must conduct anti-bias testing and keep relevant records for at least four years.

Consideration of criminal history

The proposed amendments clarify the role of automated decision-making systems when considering an applicant's criminal history. Employers must follow the same rules that govern human review, including not assessing criminal history until a conditional offer has been extended and providing the applicant with the generated report and the evaluation criteria used.

Record-Keeping Obligations

The proposed rule would require records related to the training, operation, and output of automated decision-making systems to be retained for at least four years, including data used by third parties providing such systems, to ensure transparency and accountability throughout the hiring process.

Public hearings

On July 18, the California Civil Rights Council held a public hearing at the University of California, Berkeley School of Law to discuss the proposed amendments to the Fair Employment and Housing Act regarding automated decision-making systems. Speakers offered testimony urging the Council to refine the definition and scope of the proposed rules, particularly with regard to employment agencies, the vague concept of “screening,” and the distinction between simple automation that facilitates a process and automation that makes decisions. The Council assured attendees that it would carefully consider all written and oral testimony as it revises the proposed rules. Additional information and updates will be posted on the Council's website.

California AB 2930

California's AB 2930 seeks to regulate the use of AI across various industries to combat “algorithmic discrimination,” which the bill defines as unfairly discriminating against or treating an individual less favorably based on a protected characteristic.

Scope and impact assessment

AB 2930 targets “automated decision-making tools” that make “significant decisions” in sectors such as employment, education, housing, health care, and financial services. By Jan. 1, 2026, employers and developers would be required to conduct annual impact assessments to analyze potential adverse effects and implement safeguards to address the risks of algorithmic discrimination.

Notification Requirements

Employers who use automated decision-making tools must notify individuals subject to a resulting decision of the tool's purpose, provide contact information, and give a clear explanation of how the tool works. If a decision is based solely on the output of an automated tool, employers must accommodate requests for an alternative selection process, where practicable.

Governance Program

Employers must establish a governance program to address the risks of algorithmic discrimination, including designating a responsible person, implementing safeguards, conducting annual reviews, and maintaining the results of impact assessments for at least two years. Small employers with fewer than 25 employees are exempt as long as their system does not affect more than 999 people per year.

Policy Disclosure Requirements

Employers and developers will be required to publish policies outlining the types of automated decision-making tools used and how they manage the risks of algorithmic discrimination.

Civil Liability

Individuals may bring a civil lawsuit against employers who violate AB 2930 and may recover damages, declaratory relief, and attorneys’ fees. Public attorneys may also bring civil actions for violations.

Legislative History

California's AB 2930 was introduced in February 2024. The bill passed the Assembly in May and the Senate Judiciary Committee in July, then was referred to the Senate Appropriations Committee. The state Legislature is currently on summer recess and is scheduled to reconvene on August 5. Lawmakers must finalize AB 2930 by August 31, the last day for each house to pass legislation.

Conclusion

Through the California Civil Rights Council’s proposed rule and AB 2930, California’s regulatory approach aims to prevent algorithmic discrimination in employment and other consequential decision-making. As both measures remain pending, employers should stay vigilant, monitor developments, and assess their ability to comply with these evolving regulations, especially given the broad definition of tools that may be considered AI.


