
Applying for jobs has changed dramatically in the past 20 years, and even more in the past two, with the emergence of AI. You can spend hours perfecting your resume, crafting a compelling cover letter, and preparing for interviews. But before any human ever sees your application, it often has to pass through the digital gates of an artificial intelligence screening tool. This is the new reality of hiring for millions of people.
Companies, eager to cut costs and handle large volumes of applications quickly, have turned to AI as a potential savior. The promise is seductive: an unbiased algorithm that can sift through thousands of candidates, ignore irrelevant details like names or backgrounds, and pinpoint the perfect person for the job, all in the blink of an eye. In theory, you can tune the selection process, pick candidates who meet your criteria, and build your team objectively.
Or can you?
A new study published in The International Journal of Human Resource Management reveals a stark warning: simply deploying a sophisticated AI tool is not enough to reduce bias or enhance diversity in hiring. In fact, without the right human and organizational guardrails, these powerful technologies can amplify bias rather than reduce it.
The Allure of the Algorithm
It seems that every company on the planet is flocking to AI. AI promises to make everything better and solve every problem (even problems that don’t exist), or so we’re told. But while it’s undoubtedly a powerful technology, it’s sometimes hard to separate its true potential from the hype.
The world of human resources (HR) seems like a particularly good fit for this technology. For popular roles, recruiters can be buried under hundreds, if not thousands, of applications. Manually reviewing each one is enormously time-consuming and fraught with the potential for human bias.
Decades of psychological research have shown that decision-makers are susceptible to a host of unconscious biases. People tend to hire those who are like them, a phenomenon that leads to homogenous workplaces and reinforces structural discrimination against minorities.
Enter AI, with its promise of objectivity. A growing market of tools aims to automate tedious tasks, from screening CVs and matching candidates to jobs, to analyzing voice patterns in video interviews. The argument is that an algorithm, if programmed correctly, can be less biased than a human manager. Some research has supported this, showing AI can mitigate cognitive biases and even reduce racial and ethnic disparities in hiring.
Yet, for every success story, there’s a cautionary tale. Several studies have already raised alarms about “algorithmic bias,” where AI systems, trained on historical data from biased human decisions, simply learn and amplify those same prejudices. This could entrench discrimination in a way that is harder to detect and challenge, creating a “dangerous compromise of justice, trust, and moral judgments.”
To get to the heart of the matter, the researchers designed a clever experiment.
Putting AI to the HR Test
They created a realistic, but simulated, AI hiring tool and recruited 139 participants—all with real-world hiring experience—to make 278 hiring decisions. The use of a simulated tool, a methodology known as “Wizard of Oz,” is a common practice in human-computer interaction research that allows scientists to control the variables precisely without needing a fully functional, commercial AI.
Each participant was tasked with filling two different positions in the medical field: a high-stakes job for a chief radiologist with leadership duties and a low-stakes role for a medical technical assistant. The AI tool presented them with a ranked list of five qualified candidates for each role. This shortlist was identical for everyone and included two female candidates (ranked second and fourth) and three male candidates. The top-ranked male candidate and the second-ranked female candidate were designed to be almost equally qualified, with similar experience, skills, and grades.
This setup created a clear choice: would the decision-maker follow the AI’s top ranking and choose the man, or would they reflect on other factors and choose the equally qualified woman, making a “diversity-enhancing” decision?
The researchers also varied the conditions around each decision: whether the AI explained its rankings in terms of diversity, and whether the organization had explicit diversity guidelines in place. The results were striking. When the AI tool offered diversity-related explanations, the odds of a participant choosing a female candidate increased by 154%. When the organization provided explicit diversity guidelines, the odds jumped by 308%. And when the hiring decision was for the high-stakes, quality-sensitive job of chief radiologist, the odds of selecting a woman skyrocketed by 437% compared to the low-stakes role.
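It’s worth noting that these figures describe changes in odds, not in probability, and the two are easy to conflate. A 154% increase in odds means the odds are multiplied by 2.54, which shifts the underlying probability by a smaller amount. Here is a minimal sketch of the arithmetic; the 30% baseline probability is a hypothetical value chosen for illustration, not a figure from the study:

```python
def shift_odds(p_baseline: float, odds_increase_pct: float) -> float:
    """Apply a percentage increase in odds to a baseline probability."""
    odds = p_baseline / (1 - p_baseline)             # probability -> odds
    new_odds = odds * (1 + odds_increase_pct / 100)  # scale the odds
    return new_odds / (1 + new_odds)                 # odds -> probability

# Hypothetical baseline: a 30% chance of selecting the female candidate.
baseline = 0.30
for label, pct in [("AI diversity explanations", 154),
                   ("explicit diversity guidelines", 308),
                   ("high-stakes role", 437)]:
    print(f"{label}: {shift_odds(baseline, pct):.0%}")
```

Under that assumed baseline, the three conditions would raise the chance of choosing the female candidate to roughly 52%, 64%, and 70% respectively: large effects, but more modest than the raw percentages might suggest.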
In short, the technology itself wasn’t the magic bullet. The context in which it was used—the information it provided and the organizational culture it was embedded in—made all the difference.
“We explored what conditions help AI tools to actually support more diverse hiring, as we found that simply having a reliable AI tool isn’t enough to improve diversity in workplace recruitment,” says study co-author Associate Professor Connie Zheng.
“Diversity only improves when the AI system can explain its decisions in terms of diversity, when hiring focuses on qualitative goals and not just numbers, and when an organization has clear diversity guidelines,” she adds.
What Does This Mean?
If you’re a job seeker, it means that while algorithms are powerful new gatekeepers, the human element remains decisive. You can use an AI resume skills generator to pack your application with keywords you think the machine wants to see. And for passing that initial automated screen, that strategy may have some merit. But this research shows that for the roles that truly matter—the high-stakes, leadership positions—the final decision is still deeply human and contextual.
Simply put, your CV has to satisfy both humans and machines. That means checking the algorithmic boxes, but also clearly communicating your qualifications and how you align with the company’s deeper goals, which may well include diversity and inclusion.
If you’re a manager or an HR professional, the implications are even more profound. The study acts as a guide for responsible AI implementation, but it doesn’t offer a simple recipe to follow.
Simply purchasing an AI solution and expecting it to solve your diversity problems is a recipe for failure. You must create a structured framework for its use: one that is in tune with your company’s values, and one in which the tool offers clear explanations of why it makes the decisions it does.
“These factors encourage HR professionals and decision-makers to reflect more carefully on their choices,” Zheng says. “In short, AI can help improve diversity in hiring, but only when used under the right conditions and organizational support for the application of new technology, as well as clear diversity, equity and inclusion guidelines.”
Ultimately, AI is bound to have a huge impact on HR and hiring in general, and how it affects bias will matter greatly for companies. The researchers say this type of study shows that AI can be an important support tool, but that it doesn’t automatically lead to more diverse outcomes.
“Unless the organization and its hirers are conscious about diversity and justice issues, using AI for talent acquisition isn’t going to lead to more diverse and inclusive outcomes,” Zheng says.
Journal Reference: Uta Wilkens et al, Augmenting diversity in hiring decisions with artificial intelligence tools, The International Journal of Human Resource Management (2025). DOI: 10.1080/09585192.2025.2492867