In this opinion piece, MIT Sloan Professor Emilio J. Castilla argues that:
- Algorithms promise objectivity, but in hiring they are trained on data steeped in human bias.
- Until we create fairer systems for defining and rewarding talent, algorithms will only reflect inequalities and inequities that we have yet to correct.
- The AI adoption revolution doesn’t have to be a story of automated bias. Asking tough questions before automating recruitment and selection can lead to a fairer system.
In my classroom at MIT Sloan, I often ask executives and MBA students: “Does anyone believe that AI can eliminate bias and inequity in hiring?” Most hands go up. But when I show them the data, that optimism fades.
As an example, Amazon was forced to retire its AI-powered recruiting tool after it was found to penalize resumes that included the word “women's,” as in “women's chess club captain,” and to downgrade graduates of all-women's colleges.
Another case: HireVue's speech recognition algorithms, used by more than 700 companies including Goldman Sachs and Unilever, are designed to assess candidates' conversational English. Research has found, however, that such algorithms disadvantage non-white and deaf applicants.
These are not isolated incidents. They are warnings, especially given that the market for AI screening tools in recruitment is expected to exceed $1 billion by 2027 and that an estimated 87% of companies already have such systems in place.
The appeal is obvious: faster reviews, lower costs, and the promise of unbiased hiring decisions. But the reality is more complex and far more troubling.
Problem: Bad data
AI tools don't work alone; they learn from existing data. That data can be incomplete, poorly coded, or shaped by decades of exclusion and inequality. Feed such data into a machine and the results will be unfair, reproducing bias and inefficiency at massive scale.
Some AI tools have downgraded the resumes of graduates of historically Black colleges and women's colleges because those schools have not traditionally fed the white-collar pipeline. Other systems penalize candidates with employment gaps, punishing parents, especially mothers, who put their careers on hold to care for children. What appears to be an objective evaluation is actually a reproduction of old biases, stereotypes, and other hiring mistakes, now stamped with the authority of data science.
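To make the mechanism concrete, here is a minimal sketch in Python, using synthetic data rather than any vendor's actual system. It trains a simple model on hypothetical past hiring decisions that penalized career gaps; every variable name and number is an illustrative assumption, not data from this article.

```python
# Minimal sketch (synthetic data): a model trained on past hiring decisions
# that penalized career gaps learns to reproduce that penalty, even though
# the gap says nothing about ability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)               # true ability (what we want to reward)
career_gap = rng.integers(0, 2, size=n)  # 1 = took time off, e.g., for caregiving

# Historical labels: past recruiters rewarded skill but also penalized gaps.
hired = (skill - 1.5 * career_gap + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, career_gap])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in the career gap:
same_skill = [[1.0, 0], [1.0, 1]]
print(model.predict_proba(same_skill)[:, 1])
# The gap candidate scores markedly lower: the model has "learned" the old bias.
```

The two candidates are identical in ability, yet the model scores the one with a career gap far lower, which is exactly the pattern described above.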
Beware of the “neutral aura”
This is the paradox of algorithmic meritocracy. Training an AI system on past hiring decisions (who passed the first round, who was interviewed, who was hired, who was promoted) does not teach it fairness. It teaches it patterns, including patterns formed by flawed human assumptions.
And because these systems are touted as “data-driven,” their decisions become difficult to challenge. A manager's judgment can be questioned; an algorithmic ranking arrives with an aura of neutrality. In effect, we are teaching AI tools to perpetuate the mistakes, biases, and lazy assumptions that have shaped generations of bad hiring decisions.
In my 2025 book, The Meritocracy Paradox, I argue that when organizations invoke meritocracy without addressing structural challenges, they risk deepening the very gaps they seek to close. The same goes for AI. Before letting AI automate hiring decisions, we need to carefully examine the data and assumptions encoded in these systems.
So before you automate candidate recruitment and selection, you need to ask the tough questions: What data are we encoding? What processes are these algorithms built on, and are those processes still relevant to the organization's needs? Who defines merit? Whose career paths are rewarded, and whose are ignored?
The problem of bias and inefficiency in recruitment is not a technology problem, so AI alone will not solve it. It is a human problem. Until we create fairer systems for defining and rewarding talent, algorithms will only reflect the inequalities and inequities that we have yet to correct.
AI as a turning point
The AI adoption revolution doesn’t have to be a story of automated bias and inequity. It could be a turning point: an opportunity to reconfigure the way organizations define, evaluate, and reward talent so that opportunity is open to all. But to do that, we need to be humble about what algorithms can and cannot do. Rather than using AI to avoid difficult questions, we should use it to uncover where our assumptions fall short and to identify and target pain points in our talent management strategies.
This means engaging in ongoing monitoring to uncover inequities and inefficiencies, rather than implementing one-time fixes. If we fail to address these issues, the promise of “unbiased” AI will remain just that: a promise, while yesterday's biases and stereotypes quietly shape tomorrow's workforce, one resume at a time.
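As one illustration of what such ongoing monitoring might look like in practice, the sketch below compares selection rates across applicant groups using the EEOC's well-known four-fifths rule of thumb. The group names and counts are invented for the example; they are assumptions, not data from this article or any real audit.

```python
# Sketch of a recurring fairness check: compare AI-screen pass rates across
# groups and flag any group whose rate falls below four-fifths (80%) of the
# highest group's rate. All numbers below are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical screening outcomes for one hiring cycle.
outcomes = {
    "group_a": selection_rate(120, 400),  # 30% pass the AI screen
    "group_b": selection_rate(45, 300),   # 15% pass the AI screen
}

highest = max(outcomes.values())
for group, rate in outcomes.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")
```

Run every hiring cycle rather than once, a check like this surfaces drift and disparities as they emerge, which is the point of monitoring over one-time fixes.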
Emilio J. Castilla is the Sloan Professor of Work and Organization Studies at the Massachusetts Institute of Technology and a member of the MIT Institute for Work and Employment Research. He is the author of “The Meritocracy Paradox: Where Talent Management Strategies Go Wrong and How to Fix Them” (Columbia University Press, 2025). Castilla's research focuses on the organizational and social aspects of work and employment, with an emphasis on recruitment, hiring, development, and career management, as well as the impact of teamwork and social relationships on organizational performance and innovation. His recent research includes the role of employee voice in AI implementation success and an examination of the influence of gendered language in job postings.
