I keep seeing articles reporting that AI is biased, and exhorting software developers and private companies to do more to ensure that algorithmic bias is stamped out. But I don’t see any of those people asking for human bias to be stamped out. Which is weird, because human bias is the ultimate cause of algorithmic bias.
AI models are trained on data generated by human beings, which means that when model output turns out to be racist and sexist, it's because human beings are racist and sexist. The model output is a symptom of how we behave towards each other.
I’ve taken a few headlines that have come across my feed recently, and rewritten them so that they’re more accurate.
- ‘There is no standard’: investigation finds AI algorithms objectify women’s bodies.
- ‘There is no standard’: investigation finds society objectifies women’s bodies.
- Millions of black people affected by racial bias in health-care algorithms.
- Millions of black people affected by racial bias in health-care organisations.
- AI programs exhibit racial and gender biases, research reveals.
- Human beings exhibit racial and gender biases, research reveals.
When we tell AI model developers to change how they build software to remove the inherent bias, what we’re actually asking them to do is hide our own bias. It’s as if we don’t want to be reminded that this is who we are. If an AI model does some linguistic judo on the back-end and removes any trace of bias from its output, it will only make human bias invisible and thus harder to address. If algorithmic bias is the reflection we see in the mirror of AI, we are collectively responsible for that reflection.
You could argue that this would let AI developers off the hook, but I’d argue that it’s not their hook. It’s society’s. If we want our algorithms to be less biased, we need to be less biased. And of course this is what we want; we absolutely want a less biased, less sexist, less ageist society. And we want our AI systems to reflect that society.
I don’t think it’s the responsibility of software engineers to remove bias from algorithms. I think it’s our collective social responsibility to actually be less biased towards each other. But this would mean we’d need to change our behaviours and our beliefs.
And there’s just not much evidence that we care enough about the problem to do this. It’s much easier to simply point the finger at software companies and make it their problem.