Michael Rowe

Trying to get better at getting better

Don’t blame biased algorithms for outcomes you don’t like.

“What algorithms are doing is giving you a look in the mirror. They reflect the inequalities of our society.”

Sandra Wachter, quoted in Condliffe, J. (2019). The Week in Tech: Algorithmic Bias Is Bad. Uncovering It Is Good. The New York Times.

We can start by agreeing that algorithms are biased. Unfortunately, this is where most people stop, and I think it’s because it presents a conclusion to a narrative that sits well with them. The narrative goes something like this: “Human beings are special and there are some things that computers will never be able to replicate.” The “algorithms-are-biased” conclusion helps to support that narrative because it gives us a reason not to delegate responsibility to machines.

The thing is, algorithms are biased because they reflect the bias inherent to the data they’re trained on. Briefly, machine learning algorithms are trained on massive data sets that are generated by human beings. Sometimes the data sets are collated and labeled by people, and sometimes they’re generated through our interactions in the world. Either way, our implicit biases are encoded within those data sets and it is these biases that are reflected in the outcomes generated by the algorithms.

What’s great is that the bias in AI-based systems is often explicit (i.e. we can see it), which means that we can act on it and improve the outputs. Contrast this with human beings, who are often not even aware of the cognitive biases that nudge us towards predetermined outcomes. And if we can’t even recognise it in ourselves, how are we possibly going to reduce its influence? Working in groups – especially diverse groups – means that there may be others who can hold us to account and help us to recognise our biases. But even working in groups is no guarantee that we’ll avoid the trap of following our preconceived notions of what we think ought to happen. And we often make decisions alone, which makes the bias even harder to recognise. Even when we want to do the right thing, we may not recognise when we’re failing to do it.

One reason to be optimistic about algorithmic bias is that it’s relatively easy to correct. Once we see the bias in an algorithm we can make changes to it at different points in the system, from being more careful about gathering representative samples of data on which to train the algorithms, to modifying the software itself. And once that algorithm is less biased then everything it touches is also less biased. Try doing that with even a handful of people.
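To make the “representative samples” point concrete, here’s a minimal sketch (with a hypothetical toy dataset and a made-up `rebalance` helper) of one of the simplest interventions: oversampling under-represented groups in the training data so the algorithm no longer inherits the skew of the raw sample.

```python
import random
from collections import Counter

# Hypothetical toy dataset: (features, group_label) pairs where one
# group is heavily under-represented -- a stand-in for real training data.
data = [("sample", "group_a") for _ in range(90)] + \
       [("sample", "group_b") for _ in range(10)]

def rebalance(dataset, seed=42):
    """Oversample minority groups so every group appears equally often."""
    rng = random.Random(seed)
    by_group = {}
    for item in dataset:
        by_group.setdefault(item[1], []).append(item)
    target = max(len(items) for items in by_group.values())
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        # Draw extra copies (with replacement) from under-represented groups.
        balanced.extend(rng.choices(items, k=target - len(items)))
    return balanced

balanced = rebalance(data)
print(Counter(label for _, label in balanced))  # both groups now appear 90 times
```

Real systems use more sophisticated corrections (reweighting, fairness constraints, post-hoc adjustments), but the principle is the same: the bias is visible in the data, so it can be measured and acted on.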

Here’s the thing that few people seem willing to confront: algorithms are biased because we are biased. But it’s so much easier to say that we can’t trust machine learning because it’s biased than to acknowledge that machine learning is simply making explicit the biases that we don’t want to see in ourselves. And this is one reason why finding biases in algorithms is a Good Thing. Because we can make the algorithm do better by holding it to a higher standard than we’re capable of holding ourselves to.
