AI may be accelerating it, but the practice of embellishing and even lying on job applications is nothing new or unusual. In fact, a 2017 US-based study found that 72% of applicants doctored their resumes, and 31% included outright fabrications. These findings have been replicated in subsequent studies, but the recent addition of generative AI has created an increasingly difficult situation for recruiters and HR departments.
According to a study published in April by Resume Genius, 59% of job seekers have already used AI to create their resume, 22% have used AI to answer interview questions, and 19% have used AI to complete a skills assessment. As if that weren't alarming enough, companies are also facing a wave of completely fraudulent remote workers, often based in North Korea, who use AI to construct fake identities. These "candidates" have funneled their salaries to the (sanctioned) North Korean government, and some have accessed and downloaded sensitive data, posing serious cybersecurity risks to their organizations.
Companies are already waking up to these risks, but some are further ahead than others. One of them is Equifax Workforce Solutions, whose senior vice president and general manager of talent solutions, Bart Lautenbach, spoke with us about how the company is responding to the rise of remote worker fraud. While this threat is expected to keep growing in the near future, Lautenbach asserts that there are important measures organizations can take to reduce their risk, freeing them to spend more time on legitimate job seekers.
From “inflated resumes” to “synthetic identities,” HR teams face a variety of AI-powered challenges
Lautenbach acknowledges that AI abuse is becoming increasingly common among job seekers, with cases involving remote North Korean workers being the most notorious. He also explains that this is an issue Equifax Workforce Solutions, a subsidiary of Equifax, has experienced first-hand.
“Customers have reported that they are witnessing malicious actors using synthetic identities to get through the application process,” he says. “This problem extends from early recruitment stages to the onboarding process, with people increasingly exploiting fake AI-enhanced documents to prove their eligibility to work in the United States.”
The first instance of this type of activity was reported in August 2022, but Lautenbach notes that the problem is likely to become more widespread in the coming years. "Gartner predicts that by 2028, one in four job applicants will be an imposter," he says.
And to further complicate matters for recruiters and HR teams, AI is now being used across the job-seeking spectrum, from "inflated resumes" to completely fictitious identities. "The problem of fabricated or misleading information on job applications is very pervasive, and it is made worse by the reality that AI-generated resumes make this information even more difficult to detect," he explains.
“HR teams are forced to manually sift through potentially thousands of fraudulent applications.”
Equifax conducted its own survey of human resources professionals on this issue and found that 71% had received "fabricated or misleading" information from candidates. However, according to Lautenbach, "only 20% of this group said they were 'very confident' in detecting fabricated or misleading information on resumes."
The inability to confidently detect this false information is a serious problem for businesses, and not just for cybersecurity-related reasons. Lautenbach explains that it creates a "double jeopardy for employers by posing serious security risks to sensitive internal systems and potentially massive losses in productivity as HR teams are forced to manually sift through thousands of fraudulent applications."
Mapping self-reported claims to contextual knowledge
The question therefore becomes what companies can do to not only weed out fraudulent candidates, but to do so efficiently. For Lautenbach, HR departments should strive to proactively validate candidates through a combination of "data-driven screening" and contextual knowledge.
"Research shows that 93% of job seekers have fabricated or lied during the hiring process. A person's identity is an important guardrail. Completing a background check to verify identity, education, and employment, and comparing that data to a candidate's resume, can confirm trust in a new hire," he explains.
What this means in practice is that, alongside the traditional application and interview process, companies will need to run a system for validating and cross-checking candidate information, perhaps operated by the IT or security department, wherever public or third-party data about the candidate is available. "By shifting the hiring framework from simply verifying claims to establishing a verified history, employers can ensure they prioritize genuinely qualified candidates while reducing business risk," Lautenbach adds.
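To make the idea concrete, here is a minimal sketch of the kind of cross-checking logic described above: comparing a candidate's self-reported claims against an independently verified record and flagging any discrepancies for human review. All field names, data, and the function itself are illustrative assumptions for this article, not part of any real screening product or API.

```python
# Hypothetical sketch of "data-driven screening": compare self-reported
# resume claims against independently verified background-check data.
# Field names and records here are invented for illustration only.

def find_discrepancies(resume_claims: dict, verified_record: dict) -> list:
    """Return human-readable descriptions of fields where the candidate's
    self-reported claims do not match the verified record."""
    mismatches = []
    for field, claimed in resume_claims.items():
        verified = verified_record.get(field)
        if verified is None:
            # No third-party data to check against; flag for manual review.
            mismatches.append(f"{field}: no verified data available")
        elif str(claimed).strip().lower() != str(verified).strip().lower():
            mismatches.append(f"{field}: claimed '{claimed}', verified '{verified}'")
    return mismatches

# Example: the claimed job title does not match the verified employment record.
resume = {"employer": "Acme Corp", "title": "Senior Engineer",
          "degree": "BSc Computer Science"}
record = {"employer": "Acme Corp", "title": "Engineer",
          "degree": "BSc Computer Science"}

for issue in find_discrepancies(resume, record):
    print(issue)  # → title: claimed 'Senior Engineer', verified 'Engineer'
```

In a real screening pipeline the verified record would come from a background-check provider rather than a hardcoded dictionary, and mismatches would be routed to a recruiter for follow-up rather than treated as automatic rejections.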
“HR departments should strive to proactively validate candidates through a combination of ‘data-driven screening’ and contextual knowledge.”
In addition to a data-first validation approach, Lautenbach recommends that HR teams move beyond superficial resume reviews and map a candidate's self-reported background to the contextual knowledge they should be able to demonstrate during the interview. "Asking technical, contextual questions about how and why previous projects were done can help recruiters identify when a candidate may lack the depth of experience shown on a polished, potentially fraudulent resume," he says.
Employing a combination of these strategies can significantly reduce exposure to misleading or fraudulent job applicants. While none of them is a silver bullet, they can be highly effective when implemented as part of a routine screening system. That said, companies need to stay aware of how quickly AI tools are evolving, as today's detection methods may be obsolete tomorrow.