New York City’s New AI Bias Law Will Have Widespread Impact on Jobs, Mandate Audits



Effective July 5, New York City’s Automated Hiring Decision Tools Act requires employers that use AI and other machine learning technologies as part of their hiring process to conduct an annual audit of those tools. These audits must be performed by third parties and should check for intentional or unintentional bias built into these systems.

Failure to comply with the new law, which applies to companies doing business and hiring in New York City, can result in fines ranging from $500 to $1,500 per violation.

You might think these arcane laws don’t apply to you because your company doesn’t have offices in New York City and doesn’t use AI. But you would be wrong. No matter where your office is located, the rise of remote work makes it more likely that New York City candidates will apply for positions at non-local organizations, and jurisdictions near you may enact similar laws.

The Rise of AI Bias Laws

The New Jersey legislature is considering restricting the use of AI tools in hiring unless employers can prove they’ve conducted bias audits. Maryland and Illinois have proposed legislation banning the use of facial recognition and video analytics tools in job interviews without the candidate’s consent. Meanwhile, the California Fair Employment and Housing Council is considering new mandates to outlaw the use of AI tools and tests that can screen applicants based on race, gender, ethnicity, and other protected characteristics.

Claiming ignorance about AI in talent solutions is not a viable excuse, either. Today, 40% of talent technology solutions have AI integrated in some way. Is your company using AI to filter the hundreds or thousands of resumes it receives each week? If so, your organization is almost certainly subject to this regulation.

This means that hiring executives and lawyers at any company will no longer be able to hide behind algorithms to justify hiring decisions and protect themselves from potential fines and compliance issues.

Preventing AI bias

While this impending regulation will put many organizations behind the eight ball when it comes to compliance, it’s a strong sign that governments are catching up to emerging technologies before they wreak havoc on the workforce.

Indeed, enterprise adoption of AI has had a mostly positive impact. It has made daunting aspects of hiring easier, such as filtering thousands of resumes, and has removed unintended biases from the hiring process. Recruiters receive candidates filtered by skills and experience, not by where they went to school, where they live, or when they graduated from high school.

But left unchecked, AI can also perpetuate unintended biases that violate both local and existing federal law. For example, an organization seeking to increase workforce diversity cannot legally use AI to filter in favor of protected demographics such as pregnant women, just as it cannot use AI to filter them out.

Promoting diversity is a positive goal, but when companies recruit candidates based solely on protected classes, it gives some candidates an unfair advantage over the rest — which, legally, is the same as excluding candidates on that basis.

Organizations currently grappling with the use of AI in their hiring processes and considering how best to comply with these regulations should keep the following in mind.

Rely on third parties. This is mandated by the New York City regulation, but it’s also good practice. Groundbreaking regulations often arrive with only speculative best practices, so even organizations that pay close attention to AI may struggle to stay compliant on their own. Third parties can take the guesswork out of implementing new regulations.

Transparency is key. The New York City regulation is clear on this point. Employers must publicly post the results of the third-party audit on their websites and must notify candidates and current employees residing in New York City about their use of AI in hiring decisions — by email, mail, job postings, or the company website. Strictly speaking, this only applies to New York-based candidates, but with an increasingly global workforce, it makes sense to apply this level of transparency to all candidates, locations, and stakeholders.

Audit all recruitment processes that use AI technology. Consider diversity hiring as a use case: AI is being used correctly when you employ it on the front end to expand your candidate pool and include more diverse candidates in your hiring process. But using AI to decide whom to hire at the end of the hiring process would violate federal law as well as New York City’s AI regulation. Understand the best uses for AI and where it may be introducing unlawful bias.

New York City’s new AI bias law is an important step forward in the ongoing fight against discrimination and prejudice in the workplace. By mandating comprehensive bias audits, the law pushes employers to confront their biases and create a more inclusive and fair working environment. Rather than turning a blind eye, all employers should see this as the first step in a rapidly approaching global shift in the future of work.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author information

Jonathan Kestenbaum is a licensed attorney and managing director of technology strategy and partnerships at AMS, a recruitment solutions provider and consulting firm.
