AI in the Workplace: Problem Finding for Employers | Mintz – Employment Perspective

Artificial intelligence is no longer a future consideration for employers. Artificial intelligence is already reshaping how companies hire, manage, and engage employees, and how employees perform their jobs. From AI-powered resume screening tools to automated note-taking applications and generative AI platforms integrated into daily workflows, AI has taken hold in the modern workplace. However, with rapid implementation comes a number of employment law considerations that cannot be ignored by employers.

Below, we identify key areas that employers should focus on to ensure compliance.

We will also be covering the intersection of AI and employment law at the Mintz Employment Law Summits in New York (April 30), Boston (May 7), and San Diego (June 2). The Mintz Employment Law Summit serves as a forum designed to address the most pressing challenges facing employers today, with programming centered around the key issues shaping the workplace now and in the year ahead.

AI in the hiring process and other employment-based decisions

One of the most common and legally significant uses of AI in the workplace is recruiting and hiring, with employers increasingly relying on AI tools to screen resumes, rank candidates, and conduct interviews. A growing number of states and local governments, including New York City, Illinois, California, and Colorado, are enacting or proposing legislation specifically addressing AI in employment decisions. At the same time, existing federal, state, and local antidiscrimination laws, including Title VII, the ADEA, and the ADA, apply in full force to AI-assisted decision-making, and AI tools trained on historical data can perpetuate bias related to race, gender, age, disability, and other protected characteristics. This creates particular exposure under the disparate impact theory, especially where employers cannot explain how opaque algorithmic processes arrived at their decisions. Recent lawsuits highlight these risks. In Mobley v. Workday, No. 23-cv-00770 (N.D. Cal.), for example, the court held that the provider of an AI-based screening tool that allegedly discriminated against applicants could be held liable, and allowed disparate impact claims to proceed. The court subsequently certified a collective action under the ADEA.

And the legal risks extend beyond antidiscrimination compliance. In Kistler et al. v. Eightfold AI Inc. (filed on January 26, 2026), the plaintiffs alleged that AI recruiting platforms collected inaccurate information and scored applicants without the disclosures required by the Fair Credit Reporting Act, framing AI recruiting tools as a consumer protection issue rather than a discrimination issue.

These cases raise important questions about who is legally responsible for automated decision-making, whether employers maintain a meaningful “human in the loop,” and how these processes are audited. Importantly, these questions extend beyond hiring to all employment-based decisions, including performance management and termination, whenever employers use AI systems or tools.

Protecting employee privacy when using AI

Beyond discrimination risks, the use of AI in the workplace, particularly the proliferation of AI-powered recording and note-taking tools, poses additional risks related to workplace surveillance, data security, and consent. For example, using recording tools without consent can violate federal and state wiretapping laws, which impose severe penalties, especially in “all-party” consent states such as Massachusetts and California. Some AI recording tools can silently join meetings, creating a risk of violations under wiretap laws as well as state biometric privacy laws (such as the Illinois Biometric Information Privacy Act) if the recordings capture employee biometric data, such as voiceprints or facial images. Employers should proactively establish clear policies and consent mechanisms, and consult with legal counsel about the implications of using AI recording tools before broadly deploying them.

Revising employment documents to take AI into account

As AI becomes part of daily workflows, employers will need to review and update key employment documents to fill gaps that traditional agreements were not designed to address. For example:

  • Offer letters and employment contracts. As AI tools become more sophisticated, it becomes increasingly easy for candidates to use AI to simulate skills, fabricate experience, and misrepresent their qualifications during the hiring process. To mitigate this risk, employers should consider incorporating certification language into offer letters and employment contracts requiring employees to affirmatively represent and warrant that the skills, qualifications, training, and professional experience they provided during the application and interview process are true, accurate, and complete, and that any material misrepresentation will be grounds for revocation of the offer or termination of employment. This type of provision gives employers a clear basis for taking action if it is later discovered that a new employee used AI to artificially inflate their credentials or simulate competencies they do not actually possess.
  • Job descriptions. Employers should proactively update job descriptions to reflect how AI is integrated into specific roles. Job descriptions should identify the specific AI tools, platforms, or capabilities expected for the role, rather than listing vague requirements such as “AI proficiency.” Precise job descriptions are important to defending ADA claims and, in some cases, Title VII and ADEA claims as well.
  • Restrictive covenant agreements. Standard restrictive covenant agreements were not drafted with generative AI in mind, and clauses that once adequately protected confidential information and trade secrets may now leave significant gaps. Once sensitive information is entered into a public AI platform, it may be incorporated into the model in a way that makes it impossible to separate, delete, or return the employer’s data. Under various state and federal trade secret laws, uploading trade secrets to a public AI platform can destroy the information’s trade secret status, demonstrate a failure to take reasonable measures to maintain secrecy, or waive protections the employer would otherwise rely on. Employers should consider incorporating AI-specific clauses into restrictive covenant agreements, such as clauses that explicitly prohibit employees from inputting sensitive information into unapproved AI tools or from using sensitive information to train, fine-tune, or improve AI models or systems.
  • AI policies and handbook updates. All employers should have clear and comprehensive AI usage policies in place. Effective policies should, at a minimum, identify authorized AI tools, set human oversight and data protection requirements, define acceptable and prohibited uses, and establish protocols for AI-assisted note-taking where applicable. The policy should make clear that violations will result in disciplinary action. Employers should also review their employee handbooks to determine what other policies may need to be updated to account for the use of AI in the workplace. Examples include anti-harassment and anti-discrimination policies, codes of conduct, information security policies, and confidentiality policies.

Employee training

Employee training is a critical but often overlooked component of responsible AI adoption. The Department of Labor’s recently published AI Literacy Framework shows that regulators consider AI literacy to be a fundamental expectation of employees. Many practical and legal issues arise in this area. What level of AI literacy is appropriate for each role, and how should employers document and track compliance? How should training programs address the risk of employees entering proprietary data, trade secrets, customer information, or personally identifiable information into unauthorized AI platforms? And what are the consequences if they do?

As AI capabilities rapidly evolve, one-off training will likely no longer be enough, and employers will need to consider how their programs will keep pace. Training should set expectations regarding acceptable and unacceptable uses of AI and provide guardrails around use case boundaries (i.e., distinguishing where AI can and cannot be used within a workflow), human oversight requirements, hallucinations and fabrications, and protecting sensitive data when using AI. Getting these questions right is not only important for operational efficiency; a well-documented AI training program can also serve as a safeguard for employers in multiple ways, including by establishing that the employer took reasonable steps to prevent and correct any harm resulting from the use of AI.

AI and employment litigation

In a recent blog post, we discussed a decision that has far-reaching implications for employers in litigation and investigations. In United States v. Heppner, the court held that electronic documents the defendant created using the consumer version of the generative AI tool Claude were not protected by the attorney-client privilege or the work product doctrine. The court reasoned that attorney-client privilege does not attach if a lawyer did not instruct or suggest that the client interact with a generative AI tool to seek legal advice, and if the tool’s terms of use make clear that there is no expectation of confidentiality in user input. Because Mr. Heppner chose to use Claude on his own, the information he shared with the tool was not privileged, even though he incorporated information from his attorney into his prompts and intended to share Claude’s output with his attorney.

While Heppner is an early decision and the case law in this area will no doubt continue to evolve, it nevertheless serves as an important reminder not only for litigants but also for employers who rely on AI to conduct internal investigations and assist with investigation-related tasks. Employers should consult legal counsel before using AI tools in situations that may involve privilege or work product protection.

Looking ahead

The adoption of AI in the workplace is accelerating, and the legal landscape is evolving rapidly as well. Employers that take a proactive, cross-functional approach by aligning employment practices, policies, and training programs with new legal requirements will be best positioned to leverage the benefits of AI while managing risk.

