Algorithmic de-skilling of clinical decision-makers

What will we do when we don’t drive most of the time but have a car that hands control to us during an extreme event?

Agrawal, A., Gans, J. & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence.

Before I get to the take-home message, I need to set this up a bit. The way that machine intelligence currently works is that you train an algorithm to recognise patterns in large data sets, often with the help of people who annotate the data in advance. This is known as supervised learning. Sometimes, instead of learning from annotated examples, the algorithm learns by having its outputs judged against some criterion (a score or reward) and adjusting itself to do better over time. This is known as reinforcement learning.
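To make the distinction a little more concrete, here is a minimal supervised-learning sketch using scikit-learn; the library, data set and variable names are purely illustrative assumptions on my part, not part of any clinical system:

```python
# Minimal supervised-learning sketch: learn a mapping from annotated examples
# (features X, human-provided labels y) and then make predictions on new cases.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)       # a labelled (annotated) data set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)                      # learn patterns from the annotations

print(model.score(X_test, y_test))               # accuracy on cases it has never seen
print(model.predict_proba(X_test[:1]))           # a prediction with a confidence estimate
```

The last line matters for what follows: most of these models don't just output an answer, they output an answer with an associated level of confidence.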

In both cases, the algorithm isn't trained in the wild but is rather developed within a constrained environment that simulates something of interest in the real world. For example, an algorithm may be trained to deal with uncertainty by playing StarCraft, which mimics the imperfect information that characterises real-world decision-making. This kind of probabilistic thinking defines many professional decision-making contexts, where we have to make a choice but may only be 70% confident that we're making the right one.

Eventually, you need to take the algorithm out of the simulated training environment and run it in the real world because this is the only way to find out if it will do what you want it to. In the context of self-driving cars, this represents a high-stakes tradeoff between the benefits of early implementation (more real-world data gathering, more accurate predictions, better autonomous driving capability), and the risks of making the wrong decision (people might die).

Even in a scenario where the algorithm has been trained to very high levels in simulation and then introduced at precisely the right time, so as to maximise the learning potential while minimising risk, it will still have had very little exposure to rare events. We will be in a situation where cars have autonomy in almost all driving contexts, except those where there is a real risk of someone being hurt or killed. At that moment, because of the limitations of its training, the car will hand control of the vehicle back to the driver. And there is the problem. How long will it take for drivers to lose the skills that are necessary to make the right choice in that rare event?
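To make that handover concrete, here is a rough sketch of the kind of confidence-threshold logic involved; the function names and the threshold are my own illustrative assumptions, not any manufacturer's actual implementation:

```python
# Hypothetical sketch of a confidence-based handover: the system acts on its
# own prediction when it is confident enough, and hands control back to the
# human when it isn't. The names and threshold here are illustrative only.
CONFIDENCE_THRESHOLD = 0.95   # assumed cut-off; a real system would tune this


def decide(situation, model, human_operator):
    action, confidence = model.predict_with_confidence(situation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action                        # routine case: the algorithm decides
    # Rare or ambiguous case: control passes back to the person, who may not
    # have practised this kind of decision for a long time.
    return human_operator.decide(situation)
```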

Which brings me to my point. Will we see the same loss of skills in the clinical context? Over time, algorithms will take over more and more of our clinical decision-making, in much the same way that they'll take over the responsibilities of a driver. And in almost all situations they'll make more accurate predictions than a person. However, in some rare cases, the confidence level of the prediction will drop enough for control to be handed back to the clinician. Unfortunately, at this point, the clinician likely hasn't been involved in clinical decision-making for an extended period and so, just when human judgement is determined to be most important, it may also be at its most limited.

How will clinicians maintain their clinical decision-making skills at the levels required to take over in rare events, when they are no longer involved in the day-to-day decision-making that hones that same skill?


18 March 2019 Update: The Digital Doctor: Will surgeons lose their skills in the age of automation? AI Med.

Physiopedia: awesome physiotherapy reference site

I came across Physiopedia when the site creator, Rachael Lowe, followed me on Twitter. Physiopedia is a free (to access, not edit) physiotherapy reference with a strong emphasis on being evidence-based. You must be a registered physiotherapist to get an account that enables you to contribute, which is how the site maintains quality control. A quick overview of the articles reveals that this is indeed a high-quality resource for physiotherapy clinicians, educators and students. Perhaps the best thing about each article is not only the concise information it presents, but also the reference list pointing the reader to the original sources. It's a very impressive effort.

You may wonder why I'm mentioning Physiopedia, since my own site, OpenPhysio, is an attempt to be the same thing: a free physiotherapy resource for clinicians, educators and students. There are, however, some differences that I think are worth pointing out, the main one being licensing. All the content published on OpenPhysio is specifically released under this Creative Commons license, which allows anyone to take that content and share, distribute and adapt the work, so long as they provide attribution to the original source, don't make any money from it, and agree to share it under the same conditions. I think this is an important distinction that, in itself, is enough to differentiate the two projects. Not that Physiopedia is using some heinous license; it's just that it's not specifically open. The other thing that stands out immediately is Physiopedia's clean aesthetic and writing style.

I think there's a lot of work to be done on OpenPhysio if it's going to participate in a field with such high-quality content, but that's the whole point, isn't it? As long as there are people pushing this agenda, the future of free and open content is looking good. At the end of the day, the more information that's available to physiotherapists and students, the stronger we'll become as a profession.

Note (06/04/09): I just received an email from Rachael stating that Physiopedia uses the GFDL (GNU Free Documentation License), a great license for promoting open content.