Why you need to be wary of AI bias in 2026 and how a bias audit can help you avoid it



As employers increasingly rely on AI to screen candidates, assess performance, predict risks, and support day-to-day decision-making, it's important that organizations avoid inadvertently introducing “AI bias.” A recent spate of lawsuits alleging that AI tools can unfairly disadvantage certain groups should serve as a warning, even if your systems appear neutral on their face and are being used in good faith. As we head into 2026, you need to make sure you understand what AI bias is, how it occurs, and why you should consider a bias audit when using AI-driven decision-making tools.

From data to decision: inputs, features, and weights

To understand how AI works, and how it can produce biased recommendations and decisions, it can be helpful to take a crash course in how these systems are built.

  • Inputs: AI systems are developed and trained on inputs, also known as training data. These may include structured data such as resumes, performance metrics, and credit history, as well as unstructured data such as written text, video interviews, and audio recordings.
  • Features: From these inputs, AI developers define features, the particular variables or characteristics a system evaluates when making a decision or recommendation. A resume-screening system, for example, might specify education level and previous job titles as features to consider.
  • Weights: The system assigns weights to these features, which determine how much each feature matters in producing the final output. For example, the system may value educational background over years of experience, or an uninterrupted work history over a skills-based evaluation. The sketch below shows how inputs, features, and weights combine.
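To make the relationship concrete, here is a minimal sketch of the weighted scoring step. The feature names, values, and weights are hypothetical, invented for illustration rather than drawn from any actual vendor's tool:

```python
# Minimal sketch: how a screening system might combine weighted
# features into a single score. All names and numbers are hypothetical.

# Features extracted from one candidate's materials (the "inputs" stage).
candidate = {
    "education_level": 3,        # e.g., 0 = none ... 4 = advanced degree
    "years_experience": 7,
    "employment_gap_years": 1,
}

# Weights, whether hand-chosen or learned, determine how much each
# feature matters. Here education outweighs experience and gaps are
# penalized -- design choices that shape every downstream result.
weights = {
    "education_level": 2.0,
    "years_experience": 0.5,
    "employment_gap_years": -1.5,
}

# The output is a weighted sum of the features.
score = sum(weights[name] * value for name, value in candidate.items())
print(f"screening score: {score:.1f}")   # 2.0*3 + 0.5*7 - 1.5*1 = 8.0
```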

What does “bias” mean?

Bias refers to a systematic tendency to favor certain outcomes, characteristics, or groups over others. Bias can be intentional or unintentional, explicit or implicit. Under the disparate impact theory of liability, facially neutral conduct may result in liability even in the absence of discriminatory intent if it produces results that disproportionately affect individuals based on race, gender, age, disability, or another protected status.

AI systems are shaped by the data used to train them. If that data reflects historical biases, structural inequalities, design flaws, or incomplete information, the system can replicate or amplify those patterns. As a result, AI tools can unintentionally reproduce existing inequalities at scale.

Certain features may function as proxies for protected characteristics even when those characteristics are excluded from the data provided to the AI system: zip code, for example, can correlate with race, and employment gaps can correlate with disability or caregiving responsibilities. Because outputs are shaped both by the selection of features and by the weights assigned to them, these design choices can significantly affect results and create disparities at scale.
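The proxy effect is easy to reproduce. The sketch below uses entirely synthetic data and an invented selection rule: the rule never sees the protected attribute, yet selection rates diverge because zip code correlates with group membership:

```python
# Entirely synthetic illustration of proxy bias: the selection rule
# below never references the protected attribute "group", but zip
# code correlates with group, so selection rates still diverge.
import random

random.seed(0)

def make_applicant():
    """Generate a synthetic applicant whose zip code correlates with group."""
    group = random.choice(["A", "B"])
    home_zip = "11111" if group == "A" else "22222"
    other_zip = "22222" if group == "A" else "11111"
    # 80% of each group lives in its "typical" zip code.
    zip_code = home_zip if random.random() < 0.8 else other_zip
    return {"group": group, "zip": zip_code}

def selected(applicant):
    # Facially neutral rule: favors zip 11111 because past "successful"
    # hires clustered there -- zip code acts as a proxy for group.
    return applicant["zip"] == "11111"

applicants = [make_applicant() for _ in range(10_000)]
for g in ("A", "B"):
    pool = [a for a in applicants if a["group"] == g]
    rate = sum(selected(a) for a in pool) / len(pool)
    print(f"group {g}: selection rate {rate:.0%}")
# Expected output: roughly 80% for group A vs. 20% for group B,
# even though the rule never looks at group membership.
```

An audit that only checks whether protected attributes appear in the input data would miss this entirely; measuring outcomes by group, as in the loop above, is what surfaces it.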

Common types of AI tools used by employers and businesses

AI often shows up in familiar tools that are not always labeled as AI. Common categories include:

  • Predictive tools: Predictive analytics uses historical data to identify patterns and forecast future outcomes, such as sales performance or creditworthiness. Because these tools rely on past decisions, they can reinforce existing disparities if historical practices were biased.
    • Example: Insurers use predictive analytics to estimate the likelihood that a policyholder will file a future claim and to inform pricing based on previous claims data.
  • Machine learning systems: Machine learning models learn patterns from large datasets without relying on fixed decision rules. During training, the model continually adjusts the weights assigned to different features based on previous results. Although this adaptability improves accuracy, it can also make biases difficult to identify, especially as the model evolves over time.
    • Example: Banks use machine learning models to evaluate loan applications by learning from historical loan data and adjusting the weights assigned to factors such as credit history and repayment behavior.
  • Scoring, ranking, and recommendation tools: Many AI systems generate scores, rankings, and recommendations that inform and influence human decisions, such as applicant rankings and performance scores. Even when humans remain involved, there is a risk of over-reliance on automated output, potentially reducing meaningful oversight.
    • Example: Resume screening tools rank applicants by learning which candidates progress through the hiring process and adjusting evaluation criteria based on previous hiring results (see the sketch after this list).
  • Language-based generation tools: Some AI systems analyze or generate language, as in resume screeners, chatbots, and performance-summary generators. Because these systems are trained on large amounts of text, they can reproduce the patterns and assumptions present in the training data.
    • Example: AI systems generate automated email or chat responses to customer inquiries based on patterns learned from previous communications.
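As a toy illustration of the scoring-and-ranking category, the sketch below trains a perceptron-style scorer on a few synthetic historical hiring records. It is not any real product's algorithm; it simply shows how a pattern in past decisions, here a penalty for employment gaps, becomes a learned weight applied to every future applicant:

```python
# Toy perceptron-style scorer trained on synthetic historical hiring
# outcomes. If the history penalized employment gaps, the learned
# weights reproduce that penalty in every future ranking.

# Each record: ([education_level, years_experience, had_employment_gap], hired)
history = [
    ([3, 5, 0], 1),
    ([2, 8, 0], 1),
    ([3, 6, 1], 0),   # strong candidate, but gap -> historically rejected
    ([4, 4, 1], 0),
    ([1, 2, 0], 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(100):                          # repeated passes over history
    for features, hired in history:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        error = hired - (1 if score > 0 else 0)
        for i, x in enumerate(features):      # nudge each weight toward
            weights[i] += lr * error * x      # the historical outcome
        bias += lr * error

print("learned weights:", [round(w, 2) for w in weights])
# The weight on the employment-gap feature ends up negative: the model
# has learned the biased historical pattern and will apply it at scale.
```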

Where AI bias creates legal risk: disparate impact and detection challenges

One of the most significant challenges with AI bias is that it is difficult to detect. Many AI systems operate as “black boxes,” making it hard to understand how inputs are transformed into outputs. Without intentional testing and documentation, biased results may go unnoticed until a regulatory investigation or lawsuit occurs.

The four-fifths (80%) rule

Potential disparate impact involving AI systems is often evaluated using the four-fifths (80%) rule. Under this framework, a selection rate for a protected group that is less than 80% of the most favored group's selection rate may indicate potential adverse impact. The four-fifths rule is not a definitive test for discrimination. Instead, it serves as a screening mechanism or warning indicator that may warrant closer review of a specific practice or decision-making process.

For example, if 60% of male applicants pass the screening assessment and only 45% of female applicants pass, the resulting ratio of 75% is below the 80% threshold. Although this result does not establish discrimination in itself, it does indicate potential bias and may prompt further analysis.
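Expressed in code, the check is a single ratio comparison. A minimal sketch using the numbers from the example above:

```python
# Four-fifths (80%) rule check using the selection rates from the
# example above.
def impact_ratio(protected_rate: float, favored_rate: float) -> float:
    """Selection rate of the protected group divided by the rate of
    the most favored group."""
    return protected_rate / favored_rate

male_pass_rate = 0.60      # 60% of male applicants pass the assessment
female_pass_rate = 0.45    # 45% of female applicants pass

ratio = impact_ratio(female_pass_rate, male_pass_rate)
print(f"impact ratio: {ratio:.0%}")              # 75%
if ratio < 0.80:
    print("below the 80% threshold -- flag for closer review")
```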

Why AI bias matters to employers

AI bias matters because it can create legal liability even when there is no discriminatory intent. Employers and businesses can be held liable for practices that disproportionately impact protected groups if those practices are not job-related and consistent with business necessity.

As AI tools play an increasing role in employment decisions, courts are scrutinizing how these systems operate and whether they contribute to discriminatory outcomes.

  • One example is Mobley v. Workday, a class action lawsuit pending in California federal court, in which job seekers allege that Workday's AI-based screening tools systematically rejected more than 100 applications.
  • Similarly, Harper v. SiriusXM, pending in Michigan federal court, alleges that the employer relied on an AI-powered applicant tracking system imbued with historical bias: by using data points that served as proxies for race, the system downgraded and eliminated candidates before they even progressed through the hiring process.

What employers can do now: Consider an AI bias audit

As AI-powered tools become more integrated into hiring decisions, employers must take proactive steps to assess and mitigate bias using a defensible, structured approach. Through our AI fairness and bias auditing solutions, Fisher Phillips helps employers assess risk and implement practical safeguards, including:

  • Identifying where AI tools are used throughout the employment lifecycle: pinpointing where automated decision-making can create risk, including recruiting, hiring, onboarding, performance management, staffing and assignment decisions, employee relations, retention, and termination.
  • Conducting bias audits and compliance reviews of third-party AI vendors: this includes reviewing vendor-provided bias audits and documentation, assessing whether a tool qualifies as an automated decision-making tool under current and emerging legislation, and advising on practical risk-mitigation strategies.
  • Auditing in-house or custom-built AI tools: this includes statistical tests for disparate impact across protected categories for which data are available (illustrated in the sketch after this list), explainable-AI root-cause analysis, and recommendations to reduce or correct identified biases.
  • Providing privileged legal assessments and regulatory documentation: this includes cross-jurisdictional compliance analysis, guidance on disclosure obligations, and summaries suitable for internal governance or external review.
  • Establishing an AI oversight and governance framework: this includes regular bias audits, AI-assisted monitoring, regulatory updates, compliance workshops, and policy-update guidance to address evolving legal requirements over time.
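For illustration only, and not a description of the firm's actual audit methodology, a disparate impact statistical test of the kind mentioned above could be a two-proportion z-test on selection counts. The numbers below are hypothetical:

```python
# Illustrative two-proportion z-test for a difference in selection
# rates between two groups (hypothetical counts, standard library only).
import math

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for H0: equal selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return z, p_value

# Hypothetical audit counts: 120 of 200 men vs. 90 of 200 women selected.
z, p = two_proportion_z(120, 200, 90, 200)
print(f"z = {z:.2f}, p = {p:.4f}")               # z ~ 3.00, p ~ 0.003
if p < 0.05:
    print("statistically significant disparity -- investigate further")
```

In practice, auditors typically pair a significance test like this with practical-significance measures such as the four-fifths rule discussed earlier, since very large samples can make even trivial differences statistically significant.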

Fisher Phillips collaborates with AI employment analytics firm BLDS and AI fairness software provider SolasAI to deliver these services, providing employers with an integrated and legally defensible approach to AI bias auditing, compliance, and governance.


