

ChatGPT discriminates against CVs that imply a disability

We're blindly using AI in more and more fields — but we don't really understand the consequences.

Mihai Andrei
June 25, 2024 @ 8:05 pm


Like many postgrads, University of Washington graduate student Kate Glazko was looking for internships. That’s when she came across a new pattern: more and more recruiters were saying they use AI to screen CVs.

Automated screening of CVs is, of course, not new; it’s been happening for decades. But the use of AI (in particular, generative AI like ChatGPT) is a recent development. Glazko studies how AI can replicate and amplify real-world biases, such as those against disabled people. She found that in resume screening, that’s exactly what ChatGPT does.

Ableist AI

Making an AI is a bit like making wine or juice: your end product is only going to be as good as your input. Bad grapes, bad wine; bad data, bad AI. Researchers who study the ethics of AI are often concerned about biased algorithms, either due to the data they use or due to the algorithm itself.

“Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective,” said Glazko, the study’s lead author. “For a disabled job seeker, there’s always this question when you submit a resume of whether you should include disability credentials. I think disabled people consider that even when humans are the reviewers.”

Glazko set out to see whether ChatGPT has a bias against CVs that imply a disability — it did. It consistently ranked resumes with disability-related honors and credentials (such as the “Tom Wilson Disability Leadership Award”) lower than the same resumes without those honors or credentials.

When the researchers asked the system to explain its rankings, it doubled down on the bias. It claimed that resumes with an autism leadership award had “less emphasis on leadership”, even though nothing in the CV implied that.

The more the researchers pushed the comparisons, the more explicit ChatGPT’s ableism became. It would evaluate a candidate’s ability almost entirely through the lens of the implied disability, rather than anything else, sometimes fabricating details to justify its rankings.

“Some of GPT’s descriptions would color a person’s entire resume based on their disability and claimed that involvement with DEI or disability is potentially taking away from other parts of the resume,” Glazko said. “For instance, it hallucinated the concept of ‘challenges’ into the depression resume comparison, even though ‘challenges’ weren’t mentioned at all. So you could see some stereotypes emerge.”

Things can be improved

The project started with one study author’s publicly available CV, which is about 10 pages long. The team then created six versions of it, each implying a different disability through different awards. These CVs were then ranked against the original using the standard version of ChatGPT.

The researchers then wanted to see whether they could get the AI to drop its ableist perspective. Surprisingly, a simple fix worked: they used the GPT Editor tool to customize the model with written instructions.

This kind of customization requires no coding experience. You simply give the model written instructions; in this case, the team told it not to exhibit ableist biases and to follow disability justice principles instead.
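To make the setup concrete, here is a minimal Python sketch of how such a paired-resume comparison could be assembled: the same CV is submitted with and without a disability-related award, with an optional debiasing instruction of the kind one can set in the GPT Editor. The resume text, prompt wording, and instruction text are illustrative placeholders, not the study’s actual materials.

```python
# Illustrative paired-resume setup (placeholder text, not the study's materials).
BASE_RESUME = "Jane Doe - M.S. Computer Science\nAwards: Dean's List 2022"
ENHANCED_RESUME = BASE_RESUME + "\nTom Wilson Disability Leadership Award"

# A hypothetical custom instruction emphasizing disability justice principles.
DEBIAS_INSTRUCTION = (
    "Do not penalize resumes for disability-related awards or advocacy. "
    "Evaluate candidates using disability justice principles."
)

def build_ranking_messages(resume_a: str, resume_b: str,
                           debias: bool = False) -> list[dict]:
    """Build a chat payload asking a model to rank two resumes."""
    messages = []
    if debias:
        # The debiasing instruction is prepended as a system message.
        messages.append({"role": "system", "content": DEBIAS_INSTRUCTION})
    messages.append({
        "role": "user",
        "content": (
            "Rank these two resumes for a research internship, best first, "
            "and explain your ranking.\n\n"
            f"Resume A:\n{resume_a}\n\nResume B:\n{resume_b}"
        ),
    })
    return messages

# The payload would then be sent to the model, e.g. with the OpenAI SDK:
#   client.chat.completions.create(model="gpt-4", messages=msgs)
msgs = build_ranking_messages(BASE_RESUME, ENHANCED_RESUME, debias=True)
print(len(msgs))  # 2: system instruction + user prompt
```

Comparing how often the enhanced resume is ranked lower, with and without the debiasing instruction, is the essence of the experiment the researchers ran.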

After these instructions were given, the difference was much smaller. However, for some disabilities (especially autism and depression), the improvements were minimal or completely absent.

This suggests that some of the ableist trends in AI can be reduced relatively easily, but only if a deliberate effort is made. It also goes to show that other biases are much more pervasive and difficult to eliminate.

“People need to be aware of the system’s biases when using AI for these real-world tasks,” Glazko said. “Otherwise, a recruiter using ChatGPT can’t make these corrections, or be aware that, even with instructions, bias can persist.”

“It is so important that we study and document these biases,” said Jennifer Mankoff, the study’s senior author. “We’ve learned a lot from and will hopefully contribute back to a larger conversation — not only regarding disability, but also other minoritized identities — around making sure technology is implemented and deployed in ways that are equitable and fair.”

The study was presented at the 2024 ACM Conference on Fairness, Accountability, and Transparency in Rio de Janeiro, on June 24.
