Could AI help employers screen for integrity?

These are the questions at the center of a class action lawsuit filed in Suffolk Superior Court last month by Milton resident Brendan Baker against CVS Health. Baker was not hired by the Rhode Island-based drugstore chain after completing an AI-assisted video interview, which, according to the complaint, was conducted on the HireVue platform. Baker is named as a plaintiff “on behalf of all others similarly situated.”

The use of artificial intelligence is pervasive throughout the employment landscape, raising questions about the role emerging technologies play in the workplace and the harm they can cause. Government officials are scrambling to respond to calls for stricter oversight and regulation. The White House and several federal agencies recently announced policies to scrutinize artificial intelligence in the workplace, and the U.S. Equal Employment Opportunity Commission has asked employers to analyze the technology they use to make hiring decisions to ensure it is not discriminatory, warning that employers may be held liable for actions recommended by those tools, such as who is hired, promoted, or fired.

Courtney Hinkle, an employment attorney in Washington, D.C., who has studied AI in employment, said it is unclear how decades-old legislation will apply to these technological advances, and that it is time to revisit those laws. She expects more lawsuits like this one to follow.

“Employers are constantly looking for new ways to improve the hiring process, make it fairer, and reduce subjective bias,” Hinkle said. “Employers have always been concerned about candidates padding their past experience.”

But whether artificial intelligence will help or hinder remains to be seen.

Like many other organizations, including T-Mobile, Delta Air Lines, and the Boston Red Sox, CVS uses the video interviewing platform HireVue to screen job candidates. According to the HireVue blog, AI technology is used in about one-third of interviews on the platform, and the company has said it helps employers analyze applicants’ “honesty and honor,” “enhance detection of lies,” and “eliminate embellishers.”

According to the complaint, Baker applied for a supply chain job at CVS around January 2021. At the time, HireVue’s AI-enhanced interviews used technology developed by Affectiva, a Boston company spun out of the MIT Media Lab, to analyze facial expressions, eye contact, tone of voice, and intonation. HireVue says it has since discontinued visual and audio analysis but still uses machine learning to score applicants’ performance based on transcribed responses.

Federal law has prohibited most private employers from using lie detectors to screen employees since 1988, and Massachusetts law goes further, banning the practice outright: employers may not use a polygraph or any other device, mechanism, or instrument to “assist or enable the detection of deception” as a condition of employment.

The complaint alleges that CVS’s use of HireVue’s AI-assisted screening on Massachusetts applicants violated that law. It notes that HireVue recorded candidates’ responses to a list of questions that could include integrity-related prompts, such as one asking about a time they “acted with integrity” and another asking, “What would you do if you saw someone cheating on a test?”

Lawyers for Baker declined to comment, as did CVS.

HireVue’s chief data scientist, Lindsay Zuloaga, said in a statement, “Our assessments were not designed to, and never have, assessed the truthfulness of candidate responses.” Instead, Zuloaga said, HireVue’s tools are based on “validated industrial-organizational psychology” and help hiring managers assess whether applicants’ responses are “relevant to statistically significant job-related competencies,” while reducing human bias. The company says this is a more reliable and scientific way to focus on skills than “just believing what’s written on a resume, which could be exaggerated by the writer.”

According to HireVue’s description of its assessments, the AI interprets the meaning of a candidate’s answers and weighs the relative importance of the words used. For example, job seekers who use the word “team” may score higher on teamwork. The program can also score responses on specific competencies identified for each job, such as problem-solving and communication.
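HireVue has not published the details of its scoring model, but the mechanism described above — word meanings weighted against job competencies — can be pictured with a toy keyword scorer. The lexicon and weights below are invented for illustration; a real system would use trained language models rather than hand-written word lists:

```python
# Toy illustration of competency scoring via weighted keywords.
# The competency lexicon and weights are invented, not HireVue's.
COMPETENCY_LEXICON = {
    "teamwork": {"team": 2.0, "together": 1.5, "collaborate": 2.0},
    "problem_solving": {"solve": 2.0, "analyze": 1.5, "fix": 1.0},
    "communication": {"explain": 1.5, "listen": 1.5, "present": 1.0},
}

def score_response(transcript: str) -> dict:
    """Score a transcribed answer on each competency by summing
    the weights of any matching words in the transcript."""
    words = transcript.lower().split()
    return {
        competency: sum(lexicon.get(w, 0.0) for w in words)
        for competency, lexicon in COMPETENCY_LEXICON.items()
    }

answer = "I worked with my team to analyze the issue and solve it together"
print(score_response(answer))
```

Even this caricature shows why such scoring is contested: the candidate is rewarded for saying the right words, not necessarily for having the underlying skill.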

HireVue was also at the center of a recent lie-detector lawsuit against Framingham-based TJX Companies. That suit, filed by the same attorneys, made nearly identical claims to the CVS lawsuit and was voluntarily dismissed by the plaintiffs. TJX declined to comment.

Lie detectors were widely used by employers in the 1980s; by 1985, an estimated 2 million job seekers and employees were subjected to polygraph tests, according to Hinkle’s law school research paper, “Modern Lie Detectors,” published in the Georgetown Law Journal. Monica Shah, an employment attorney at the Boston law firm Zalkind Duncan & Bernstein, said the Massachusetts law banning these tests defines lie detectors broadly, and the growing use of AI could lead to further legal challenges. Shah is particularly concerned about employers using AI as a way to avoid responsibility for decisions affecting workers.

“There is a concern that there will be a lack of accountability and ownership of decisions made through AI technology,” she said.

And any objective, unbiased analysis that AI is supposed to provide is only as unbiased as the data behind it. In 2018, for example, Amazon reportedly scrapped an AI hiring tool after discovering that its system for evaluating candidates for tech jobs favored men over women. The resumes the system was trained on came from past applicants for such positions, most of whom were men.
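The Amazon episode illustrates a general point: a model fit to historically skewed outcomes reproduces the skew. A minimal sketch of that dynamic — the resumes, tokens, and scoring rule here are all invented for illustration, not Amazon’s actual system:

```python
# Toy illustration of how biased training data yields a biased model.
from collections import Counter

# Historical outcomes: "hired" resumes skew toward a male-correlated
# token, mirroring the makeup of the workforce at the time.
history = [
    (["python", "mens_chess_club"], "hired"),
    (["java", "mens_chess_club"], "hired"),
    (["python", "womens_chess_club"], "rejected"),
    (["java"], "hired"),
]

# "Training": each token's weight is how often it co-occurs with a
# hire minus how often it co-occurs with a rejection.
weights = Counter()
for tokens, outcome in history:
    for t in tokens:
        weights[t] += 1 if outcome == "hired" else -1

def score(resume_tokens):
    """Score a resume as the sum of its tokens' learned weights."""
    return sum(weights[t] for t in resume_tokens)

# Two candidates identical except for the club token: the learned
# weights penalize the women's club.
print(score(["python", "mens_chess_club"]))    # higher score
print(score(["python", "womens_chess_club"]))  # lower score
```

Nothing in the training step mentions gender; the bias enters entirely through the historical labels, which is what makes it hard to spot.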

Still, AI recruiting services are popping up everywhere, promising fast, efficient, and fair talent acquisition from recruitment through hiring. Tracy Westcott, founder of Swampscott recruitment consultancy Talent Track Solutions, said it is important for employers using these services to invest in proper compliance and training and to be transparent with job seekers. Westcott also cautioned against using AI early in the process, before a human has determined whether a candidate is a good fit based on the application and resume. Even so, she believes those initial reviews will soon be largely automated as well.

Naveen Bateja, chief human resources officer at Medidata Solutions, a New York-based life sciences platform, and a frequent speaker on AI in the workplace, warned that companies should proceed with caution given concerns about privacy, accuracy, and fairness, especially when assessing “complex and multifaceted” human emotions.

There is simply no science when it comes to assessing truthfulness, said Brandeis University social psychologist Leonard Saxe, who assisted Congress with research into lie detection before the Employee Polygraph Protection Act passed in 1988. There is no “smoke alarm” that goes off in the brain when someone tells a lie, he said, and as far as we know, there is no way for automated systems to distinguish lies from truth.

Assessing honesty also requires understanding context, he said. Take George Santos and Donald Trump: “They have lied so many times that I find it difficult to understand if there are any signs that they are deceiving.”

Hinkle noted that the one-way nature of recorded video interviews also precludes human interaction. Without social cues and conversational banter, candidates can appear awkward or anxious, which can be misunderstood by AI.

“You’re kind of talking into the void,” she said. “Are they going to recognize that uncertainty? Does it seem dishonest and deceptive in a way?”

“Something is just missing in terms of the human element.”


Katie Johnston can be reached at katie.johnston@globe.com. Follow her on Twitter @ktkjohnston.




