I get that this is probably a controversial position to take, but I’m going to suggest that bias in machine learning (ML) models may be a Good Thing.
But first, I need to ask – again – why everyone piles onto bias in machine learning while being quite content to give human beings a pass.
Maybe it’s something to do with scale; ML bias has the potential to affect many more people. For example, a prediction algorithm that scores patients on whether or not they should receive a disability grant may have far-reaching effects on tens of thousands (millions?) of people. But human beings currently make those decisions and I’ve heard way too many clinicians talk about “lazy and entitled” patients to believe that we’re not biased.
Another argument might be that the outcomes of ML bias are high-stakes, and that’s why we can’t leave them to algorithms. For example, a model suggesting that certain patients are unlikely to benefit from ventilation might mean that those patients die (although it also means that someone else gets the ventilator, so maybe that’s not a good example). But again, (biased) human beings have been making those calls for… well, forever. We’re the ones who decide who lives and who dies, and we know that our beliefs influence our behaviour.
So, I’m not convinced by the arguments I’ve seen for why ML bias is worse than human bias. At least ML bias can be changed. It’s massively complex and it’s going to take time but at least it’s a tractable problem, unlike the project of trying to change human bias, especially on a societal level.
The bias we’re seeing in ML models is really the bias inherent in the data those models are trained on. No one sets out to create an algorithm that denies financial support to someone who lives in a certain area code. ML models learn human preferences based on patterns they see in the data. They’re our biases, reflected back to us. Clear. And quantifiable.
Machine learning bias is a more objective measure of something we all know is true: that certain populations don’t have access to justice, or education, or social benefits, or decent housing. And when it’s presented to us as obviously as it is in algorithmic outputs, it’s hard for anyone to argue that this bias is not real.
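To make “quantifiable” concrete: one common way of measuring this kind of bias is the demographic parity gap – the difference in positive-outcome rates between groups. The sketch below is a minimal illustration with entirely hypothetical decisions and group labels (the area codes and approval data are invented for the example, not drawn from any real model).

```python
# Minimal sketch: quantifying model bias as a demographic parity gap.
# All data below is hypothetical, for illustration only.

def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = {}
    for g in set(groups):
        member_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(member_outcomes) / len(member_outcomes)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical grant decisions (1 = approved) and applicants' area codes.
decisions = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
areas     = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, areas)
print(f"Approval-rate gap between areas: {gap:.2f}")
```

A gap of zero would mean both areas are approved at the same rate; the larger the gap, the more the model’s decisions track group membership rather than individual circumstances. The point is that this number exists at all – you can compute it, track it, and argue about it, which is more than can be said for a clinician’s private assumptions.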
Biased ML models are clear, demonstrable examples of the areas in society that we need to change. Bias in ML models gives us targets to aim at.
And that’s a Good Thing.