
If you’re a recruiter, chances are you spend the majority of your workday reading cover letters and resumes created using AI. At least, that’s the conclusion you might come to after looking at recent research on the subject, including a study published by CharityJobs in February. It found that 64% of applicants in the UK charity sector approved of using AI as part of the recruitment process in 2025, up from 52% in 2024, and close to the figure in a January study by EdTech company Kahoot!, which reported that UK-based Gen Z workers are using LLMs to write cover letters and other application materials.
These numbers are just the tip of a very large iceberg, as cybersecurity and human resources experts attest that the use of AI in job applications is widespread. Not only that, much of this use is currently problematic or outright fraudulent, with some bad actors even using AI to provide real-time interview assistance, create fake identities, and generate live deepfake streams. These activities were most notorious in the context of North Korean hackers fraudulently securing jobs with European companies, but the issue resurfaced in the United States in March.
While there is no single tool or method by which HR departments can reliably detect the use of AI, there are various steps they can take during the application process to significantly reduce the likelihood of deception by fraudulent candidates. And when these steps are combined into a package of anti-deception measures, organizations can reduce the risk to a minimal level.
Scope of AI use in job applications
The use of AI in job applications is now so common that companies can have a hard time distinguishing between legitimate and illegitimate uses. “The use of AI in applications is wide-ranging, and the line between assistance and fraud is completely blurred,” said Shahak Shalev, Global Head of Fraud and AI Research at Malwarebytes.
Part of this, Shalev says, is the prevalence of “AI-generated application spam”: templated resumes and cover letters sent by automated tools that fire off “hundreds” of applications per day. That alone is a big enough problem for most prospective employers, but beyond it lie more clearly fraudulent uses, such as real-time interview assistance, where candidates covertly run LLMs that feed them answers to interviewers’ questions.
“The use of AI in applications is wide-ranging, and the line between assistance and fraud is completely blurred.”
Shahak Shalev, Global Head of Fraud and AI Research, Malwarebytes
But at the other end of the spectrum, recruiters are also witnessing outright fraud, which is a concern from a security perspective, not just an employee-quality one. “That means synthetic identities with AI-generated headshots, fabricated LinkedIn histories, and live deepfake video in the interviews themselves,” Shalev said, adding that the toolkit pioneered by North Korean “IT workers” is now “used by ordinary criminals” thanks to its low cost and relative ease of use.
According to Bart Lautenbach, senior vice president and general manager of talent solutions at Equifax, problematic and fraudulent use of AI in the job application process is “increasingly common.” As an example, he notes that more than 100 American companies have run into trouble with North Korean scammers and hackers, while Equifax clients have also seen malicious actors use “synthetic identities” to “pass through” application processes.
“This problem extends from the early recruitment stage to the onboarding process, where people are increasingly using AI-powered forged documents to prove their eligibility to work in the United States,” he says. “Beyond simply securing employment, these synthetic identities can potentially infiltrate corporate systems for more dangerous purposes, such as large-scale data theft or leaking sensitive internal information.”
Such risks are not hypothetical: according to the U.S. Department of Justice, North Korea’s IT worker program gained access to International Traffic in Arms Regulations (ITAR) data from a California-based defense contractor. This technical data, legally restricted under the ITAR framework, was downloaded by foreign conspirators, and the overall scheme generated $5 million in revenue for the Democratic People’s Republic of Korea (DPRK).
Will it get worse before it gets better?
North Korea is often cited as a major source of fraudulent job applications, according to security firm Pindrop, though some IP addresses can also be traced back to Russia. In any case, many fake-worker schemes are surprisingly sophisticated, with Christine Kaszubski-Aldrich, chief human resources officer at Pindrop, reporting that she is “increasingly” encountering candidates using deepfake technology to manipulate their faces and voices in interviews.
“These candidates often seem trustworthy, have relevant experience and a strong LinkedIn profile, and are natural in conversation,” she says. “However, in some interviews, Pindrop’s platform flagged what initially appeared to be genuine candidates as AI-generated.”
“These candidates often seem trustworthy, have relevant experience and a strong LinkedIn profile, and are natural in conversation.”
Christine Kaszubski-Aldrich, Chief Human Resources Officer, Pindrop
Kaszubski-Aldrich added that these applicants were using synthetic voices overlaid on live feeds to mislead recruiters, and that Pindrop has also observed candidates being swapped for different people at various stages of the application process. “This type of proxy interviewing creates a lack of identity consistency throughout the hiring process, making it difficult to ensure teams are evaluating the same candidate end-to-end,” she explains.
Most worryingly, Pindrop’s internal recruitment data shows that 16.8% of job applicants display signs of digital manipulation, and in some cases possible fraud. Kaszubski-Aldrich therefore predicts that as AI continues to evolve, the hiring process will be exposed to ever greater security and identity risks. Equifax’s Bart Lautenbach agrees with this assessment, noting that Gartner predicts that by 2028, 25% of job applications will be fake. He also points to a recent Equifax study that found 71% of HR professionals have encountered misleading or false candidate information, a figure that underlines how widespread the problem of AI-generated applications, and indeed AI-generated candidates, has become.
What organizations can do to eliminate fake applicants
Lautenbach argues that the rise in AI-powered misleading and fake job applications not only poses a serious security risk to organizations, but also threatens to significantly reduce the productivity of HR departments, which are “forced to manually sift through potentially thousands of fraudulent applications.” In the face of this, he advises companies to conduct thorough background checks to determine whether a candidate’s self-submitted profile matches independently verifiable information. He also advocates testing interviewees’ situational knowledge by asking technical and context-specific questions about previous projects and roles.
The problem is further exacerbated by the fact that there is no “single reliable detector” of AI usage in job applications, according to Shahak Shalev of Malwarebytes. “AI detection tools have well-documented problems with false positives and can also create disparate-impact risks under employment law,” he said. So instead of a one-size-fits-all AI detector, which doesn’t exist, Shalev recommends “layered friction.” This could include adding at least one real-time, unscripted interaction to the interview that is difficult to fake, from something as simple as asking a candidate to wave a hand in front of their face (something deepfake models handle badly) to steering the conversation toward an off-topic or unexpected subject.
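To make the idea concrete, here is a minimal sketch of what “layered friction” could look like in a screening workflow: independent signals are accumulated, and only the aggregate triggers escalation to a verified stage. The signal names, weights, and threshold are illustrative assumptions, not Malwarebytes’ actual method.

```python
# Minimal sketch of "layered friction": no single check decides the outcome;
# independent signals are combined, and only the aggregate triggers escalation.
# All signal names, weights, and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ScreeningSignals:
    resume_near_duplicate: bool      # resume closely matches known spam templates
    failed_liveness_prompt: bool     # e.g. hand-wave in front of face not performed cleanly
    scripted_answers_offtopic: bool  # fluent on prepared topics, falters on unexpected tangents
    identity_mismatch: bool          # submitted documents disagree with verified records

def risk_score(signals: ScreeningSignals) -> float:
    """Weighted sum of independent signals; weights are placeholder values."""
    weights = {
        "resume_near_duplicate": 0.15,
        "failed_liveness_prompt": 0.35,
        "scripted_answers_offtopic": 0.25,
        "identity_mismatch": 0.45,
    }
    return sum(w for name, w in weights.items() if getattr(signals, name))

candidate = ScreeningSignals(
    resume_near_duplicate=True,
    failed_liveness_prompt=True,
    scripted_answers_offtopic=False,
    identity_mismatch=False,
)

# Escalate to an in-person or verified-video stage rather than auto-rejecting,
# which limits the false-positive harms associated with single AI detectors.
if risk_score(candidate) >= 0.4:
    print("Escalate: require verified video or in-person stage")
```

The design point is that a candidate who trips one layer gets more friction, not an automatic rejection, which is consistent with Shalev’s warning about false positives.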
“For roles that require sensitive access, at least one in-person or verified-video stage is required before a final offer,” Shalev advises. This, alongside verifying ID documents, phone numbers, and addresses (especially if company devices need to be shipped), means HR should work directly with IT and security when onboarding new employees.
Similarly, Christine Kaszubski-Aldrich says the most important change Pindrop has made is separating identity verification from candidate evaluation. The company is also implementing system-level checks throughout the hiring process, including tools that verify the same voice is being heard at each interview stage.
“The goal is not to turn the interview into a security checkpoint,” she concludes. “This is to move identity verification into the background as much as possible.”
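As an illustration of the kind of system-level check Kaszubski-Aldrich describes, the sketch below compares speaker embeddings from two interview stages using cosine similarity. The random vectors, the 0.7 threshold, and the helper names are placeholders for illustration; Pindrop’s actual platform is not described at this level of detail.

```python
# Sketch of a cross-stage voice-consistency check: compare speaker embeddings
# from two interview stages using cosine similarity. In practice the embeddings
# would come from a speaker-verification model; the random vectors and the 0.7
# threshold below are stand-ins for illustration only.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(emb_stage1: np.ndarray, emb_stage2: np.ndarray,
                 threshold: float = 0.7) -> bool:
    """Flag a possible speaker switch if similarity falls below the threshold."""
    return cosine_similarity(emb_stage1, emb_stage2) >= threshold

rng = np.random.default_rng(0)
screen_call = rng.normal(size=192)   # stand-in for a phone-screen embedding
panel_round = rng.normal(size=192)   # stand-in for a panel-interview embedding

if not same_speaker(screen_call, panel_round):
    print("Voice mismatch across stages: route to manual identity review")
```

A flagged mismatch would route the candidate to manual identity review in the background, keeping the check out of the interview itself, as Kaszubski-Aldrich recommends.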
