Want Less-Biased Decisions? Use Algorithms.

At the heart of this work is the concern that algorithms are often opaque, biased, and unaccountable tools being wielded in the interests of institutional power. So how worried should we be about the modern ascendance of algorithms?

These critiques and investigations are often insightful and illuminating, and they have done a good job in disabusing us of the notion that algorithms are purely objective. But there is a pattern among these critics, which is that they rarely ask how well the systems they analyze would operate without algorithms. And that is the most relevant question for practitioners and policy makers: How do the bias and performance of algorithms compare with the status quo? Rather than simply asking whether algorithms are flawed, we should be asking how these flaws compare with those of human beings.

Source: Miller, A. P. (2018). Want Less-Biased Decisions? Use Algorithms. Harvard Business Review.

From where I’m standing, this isn’t even news. Anyone who has worked with other human beings has first-hand experience of our ability to make bad choices. In retrospect, we look back at those decisions and wonder how anyone could have been so blind to what was obviously an awful call. And we’re predictable in how consistently we make those bad choices. To think that there is something special about human intelligence is to willfully ignore the evidence.

In all the examples mentioned…, the humans who used to make decisions were so remarkably bad that replacing them with algorithms both increased accuracy and reduced institutional biases.

Yes, algorithms are biased, but they aren’t any more biased than human beings. In fact, the evidence seems to show that they are less biased, more accurate, and faster to reach conclusions than we are. There’s nothing special about having a human in the decision-making loop, and sometimes I wonder whether that requirement simply adds more noise to the system. Whereas an algorithm can support its decision with a direct link back to the data, we’ll never really know what informs human-derived conclusions. We’re probably moving towards a future where trust in machines is the norm, and this will have implications for how we prepare future healthcare professionals for clinical decision-making.