Stefan Milne (2024-06-26). ChatGPT Shows Hiring Bias Against People With Disabilities.
ChatGPT consistently ranked resumes with disability-related honors and credentials—such as the “Tom Wilson Disability Leadership Award”—lower than the same resumes without those honors and credentials, according to new research.
We've known this for a long time: language models are biased.
But that's not why I'm sharing this post. I want to suggest a few follow-up questions that the headline ignores:
- To what extent is ChatGPT's disability bias simply reflecting widespread societal attitudes and existing (human) hiring practices?
- What is the prevalence of this kind of bias among humans?
- In this study, the AI's bias was reduced with a few prompts. How likely is it that we could reduce the bias in human assessors at all?
- Why do we hold AI systems to a higher standard than human decision-makers?
While it's obviously important to identify and reduce bias in AI systems, focusing solely on AI misses the bigger problem: these systems are learning from, and reflecting back, long-standing societal prejudices against people with disabilities.
The fact that ChatGPT’s bias could be partially corrected through simple instructions raises challenging questions about how deeply rooted these same biases are in human decision-makers and what it would take to achieve similar improvements in human hiring practices.
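To give a sense of how lightweight that kind of correction can be, here is a minimal sketch of adding explicit fairness instructions to a resume-screening prompt through the OpenAI chat API. The model name, the wording of the instructions, and the scoring format are my own assumptions for illustration, not the study's actual materials.

```python
# Minimal sketch: prepend fairness instructions to a resume-screening prompt.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# in the environment. Model name, instruction wording, and output format are
# illustrative assumptions, not the study's materials.
from openai import OpenAI

client = OpenAI()

# Hypothetical instructions, analogous to the study's approach of telling
# the model not to exhibit ableist bias when evaluating candidates.
FAIRNESS_INSTRUCTIONS = (
    "Evaluate candidates solely on skills and experience relevant to the role. "
    "Do not penalize disability-related awards, advocacy work, or accommodations; "
    "treat them as you would any other leadership or service credential."
)

def rank_resume(job_description: str, resume_text: str) -> str:
    """Ask the model to score a resume against a job description."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study used ChatGPT
        messages=[
            {"role": "system", "content": FAIRNESS_INSTRUCTIONS},
            {"role": "user", "content": (
                f"Job description:\n{job_description}\n\n"
                f"Resume:\n{resume_text}\n\n"
                "Give a suitability score from 1 to 10 with a brief rationale."
            )},
        ],
    )
    return response.choices[0].message.content
```

The point is not that this particular prompt is the right one, only that the intervention amounts to a few sentences of text, which is strikingly cheap compared with what it takes to shift entrenched attitudes in human reviewers.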
What if the solution to reducing bias in society is to remove the human from the loop?