

AIs show a CV racial bias and absolutely no one is surprised

Despite promises that AI would reduce racial bias, these models seem to only perpetuate it.

Mihai Andrei
November 15, 2024 @ 6:58 pm


Ever since large language models (LLMs) like ChatGPT became widely used, experts have warned that these systems could perpetuate or even worsen existing societal biases. Now, a new study has confirmed just that: AI models show a strong preference for white-associated names in hiring simulations, raising concerns about their role in perpetuating discrimination.

AI-generated image of a white and a black job applicant.

Two decades ago, economists conducted a landmark study where they sent out thousands of fictitious job applications to companies in Boston and Chicago. The applications were identical, except for the names — some were traditionally black-sounding, while others were white-sounding. The results were staggering: applicants with white names received 50% more callbacks.

Although the gap has narrowed over time, the bias remains. A more recent study that sent out 83,000 fake job applications found a 10% difference in callback rates. And despite promises that AI would reduce human bias, there are signs that these models are not living up to that expectation.

AI seems to dislike black applicants

Researchers from the University of Washington tested three cutting-edge LLMs using over 500 job descriptions and 500 resumes, focusing on nine occupations including CEO, teacher, accountant, and engineer.

The objective was to evaluate whether AI systems favored resumes with signals for race (black vs. white) and gender (male vs. female). They also analyzed whether these biases compounded for intersectional identities, such as black women.

The results were striking. Across three million resume-job comparisons, resumes with white-associated names were favored by the AI models in 85% of cases. In contrast, resumes with black-associated names were selected only 8.6% of the time. Although gender bias was less pronounced, male-associated names still had a slight advantage, being preferred just over 50% of the time.

Black males, in particular, were significantly disadvantaged. In some scenarios, they were completely overlooked in favor of white male candidates. Black female names fared slightly better but still faced substantial disadvantages compared to their white counterparts.
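The core of the audit design is simple: hold everything in a resume constant, vary only the name, and tally how often the model picks each variant. The sketch below illustrates that arithmetic with a toy stand-in for the model. The `model_prefers` function and its deliberately biased rule are invented for illustration; the actual study queried real LLMs.

```python
from itertools import product

# Names used only to signal a group in this toy example.
WHITE_NAMES = {"Emily", "Greg"}

def model_prefers(job: str, resume_a: str, resume_b: str) -> int:
    """Toy stub for an LLM ranking call: return 0 if the model picks
    resume_a, 1 if it picks resume_b. This stub is hard-coded to favor
    white-associated names, purely to illustrate the tallying."""
    return 0 if any(name in resume_a for name in WHITE_NAMES) else 1

def audit(jobs, name_pairs, template):
    """Compare identical resumes that differ only in the name, and
    return the fraction of comparisons each group wins."""
    wins = {"white": 0, "black": 0}
    for job, (white_name, black_name) in product(jobs, name_pairs):
        choice = model_prefers(job,
                               template.format(name=white_name),
                               template.format(name=black_name))
        wins["white" if choice == 0 else "black"] += 1
    total = sum(wins.values())
    return {group: count / total for group, count in wins.items()}

rates = audit(jobs=["Accountant", "Teacher", "Engineer"],
              name_pairs=[("Emily", "Lakisha"), ("Greg", "Jamal")],
              template="Experienced professional. Name: {name}.")
print(rates)  # the biased stub picks the white-named resume every time
```

Scaling this design up to hundreds of resumes, hundreds of job descriptions, and many name pairs is how the study arrived at its three million comparisons.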

Why these biases appear

In some ways, LLMs still work as a “black box” — it’s not clear why they make some of the decisions they make. However, researchers believe they can explain at least a part of this effect.

The first is the training data. These models were trained on huge amounts of text, much of it scraped from the internet. That text carries the same biases we carry as a society, and perhaps more. In effect, the models “learn” social stereotypes along with everything else.

The second is a frequency effect. If black people have traditionally been underrepresented in certain fields, an LLM can naturally perpetuate that trend in its selections.

Other factors may be at play as well, but it’s hard to disentangle these from the racial and gender influences.

How to eliminate the bias

At first glance, you’d say the answer is easy: just remove the name from CVs. This idea has been floating around for a while, but it may not be all that effective. The name is just one of the racial identifiers that AIs can detect. Educational institutions, locations, and even particular word choices can signal gender and racial identities. Removing the name can address a part of the problem, but only a part of it. Plus, removing names doesn’t address the root cause — the biases embedded in the language models themselves.

A Salesforce spokesperson told GeekWire that the company doesn’t just blindly use these AI models. “Any models offered for production use go through rigorous testing for toxicity and bias before they’re released, and our AI offerings include guardrails and controls to protect customer data and prevent harmful outputs.” However, such claims are hard to verify independently.

More thorough solutions would be to modify the training data, adjust algorithms to disregard specific identity markers, or debias the embeddings. However, as the study notes, these approaches often reduce people’s identities to “same vs. different,” without acknowledging the unique challenges that marginalized groups face.
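To give a flavor of what “debiasing embeddings” means in practice, here is a toy sketch of one common idea: estimate a bias direction in the embedding space and project it out of word vectors. The two-dimensional vectors and helper names below are invented for illustration, not taken from the study.

```python
def dot(u, v):
    """Dot product of two equal-length vectors (plain Python lists)."""
    return sum(a * b for a, b in zip(u, v))

def project_out(vec, direction):
    """Remove vec's component along `direction` (assumed unit-length),
    leaving only the part of the embedding orthogonal to the bias axis."""
    scale = dot(vec, direction)
    return [a - scale * b for a, b in zip(vec, direction)]

bias_direction = [1.0, 0.0]   # e.g. an empirically estimated identity axis
word_vec = [0.8, 0.6]         # toy embedding with a component along that axis
debiased = project_out(word_vec, bias_direction)
print(debiased)  # the bias component is removed: [0.0, 0.6]
```

The limitation the study points to is visible even in this sketch: collapsing identity onto a single axis and zeroing it out treats all group differences as interchangeable, which is exactly the “same vs. different” framing the authors criticize.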

Perhaps the most challenging solution, but also potentially the most effective, is changing how we conceptualize professionalism. For instance, if certain words or phrases commonly associated with women (like “cared” or “collaborated”) are valued less by AI systems, we may need to reevaluate what we consider a “strong” resume. Language is context-dependent. Words associated with empathy or teamwork should be just as valued as those associated with leadership and assertiveness.

You should care about this

AI is poised to transform job recruitment. Tools like ChatGPT have made it easier to generate tailored job applications, while companies are increasingly using AI to screen resumes. And you’re probably already starting to see how this can be a problem.

If companies adopt these systems uncritically, they’re simply perpetuating existing biases. And often, they’re not actually hiring the best people for the job. This is both a social and a productivity problem. By replicating and even amplifying biases, AI-based resume screening tools could make it harder for certain groups to advance their careers. Ultimately, this can impact the economic and social mobility of entire communities.

In addition, these findings underscore the importance of transparent audits and regulatory oversight for AI hiring tools. It’s one thing to automate repetitive tasks, but when it comes to shaping people’s careers and livelihoods, fairness must be prioritized.

