How to ensure safety for medical artificial intelligence

When we think of AI, we are naturally drawn to its power to transform diagnosis and treatment planning and weigh up its potential by comparing AI capabilities to those of humans. We have yet, however, to look at AI seriously through the lens of patient safety. What new risks do these technologies bring to patients, alongside their obvious potential for benefit? Further, how do we mitigate these risks once we identify them, so we can all have confidence the AI is helping and not hindering patient care?

Source: Coiera, E. (2018). How to ensure safety for medical artificial intelligence.

Enrico Coiera covers a lot of ground (albeit briefly) in this short post:

  • The prevalence of medical error as a cause of patient harm
  • The challenges and ethical concerns that are inherent in AI-based decision-making around end-of-life care
  • The importance of high-quality training data for machine learning algorithms
  • Related to this, the challenge of poor (human) practice being encoded into algorithms and so perpetuated (see the sketch after this list)
  • The risk of becoming overly reliant on AI-based decisions
  • Limited transferability when technological solutions are implemented in different contexts
  • The importance of starting with patient safety in algorithm design, rather than adding it later
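
As a sketch of that encoding problem, here's a minimal, entirely invented example: a model trained on historical treatment decisions that under-served one patient group will learn and reproduce exactly that pattern. The data, variable names, and numbers below are mine, not Coiera's.

```python
# A minimal sketch (my illustration, not from Coiera's article) of how a
# biased human practice gets encoded into a model trained on historical
# decisions. All variables and numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
severity = rng.normal(size=n)            # true clinical need
group = rng.integers(0, 2, size=n)       # arbitrary patient attribute
# Suppose historical clinicians under-treated group 1 at equal severity:
treated = ((severity + 0.8 * (group == 0)) > 0.5).astype(int)

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, treated)

# The learned weight on `group` is strongly negative: the model now
# recommends treatment less often for group 1, perpetuating the bias.
print(model.coef_)
```

The model isn't malfunctioning; it is faithfully reproducing the practice baked into its training data, which is precisely why data quality has to be a first-order concern.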

Taken together, the points in the summary above give you enough of a foundation to really get to grips with some of the most interesting and challenging areas of machine learning in clinical practice. They might even serve as the outline for a fairly comprehensive research project.

For more thoughts on developing a research agenda in related topics, see: AMA passes first policy guidelines on augmented intelligence.

Note: you should check out Enrico’s Twitter feed, which is a goldmine for cool (but appropriately restrained) ideas around machine learning in clinical practice.

Defensive Diagnostics: the legal implications of AI in radiology

Doctors are human. And humans make mistakes. And while scientific advancements have dramatically improved our ability to detect and treat illness, they have also engendered a perception of precision, exactness and infallibility. When patient expectations collide with human error, malpractice lawsuits are born. And it’s a very expensive problem.

Source: Defensive Diagnostics: the legal implications of AI in radiology

There are a few things to note in this article. The first, and most obvious, is that we hold AI-based expert systems (i.e. algorithmic diagnosis and prediction) to a much higher standard than human experts. It seems strange that we accept the fallibility of human beings but expect nothing less than perfection from AI-based systems. [1]

Medical errors are more frequent than anyone cares to admit. In radiology, the retrospective error rate is approximately 30% across all specialities, with real-time error rates in daily practice averaging between 3% and 5%.

The second takeaway is that one of the most significant areas of influence for AI in clinical settings may not be primary diagnosis but rather the follow-up analysis that highlights potential mistakes the clinician may have made. These applications of AI for secondary diagnostic review will be cheap and won't add any workload for healthcare professionals: they will simply review the clinician's conclusion and flag those cases that may benefit from additional testing. Of course, this will probably be driven by patient litigation.
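
To make that idea concrete, here's a rough sketch of what such a secondary-review step could look like. Everything here is hypothetical: the `model` object, its `predict_proba()` method, and the 0.8 confidence threshold are my assumptions, not anything described in the article.

```python
# Hypothetical sketch of AI-based secondary diagnostic review: re-read a
# study after the clinician has signed off, and flag it only when a
# confident model disagrees. Names, API, and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class ReviewResult:
    study_id: str
    clinician_finding: str    # e.g. "normal" or "abnormal"
    model_finding: str
    model_confidence: float
    flagged: bool

def secondary_review(study_id, image, clinician_finding, model,
                     disagreement_threshold=0.8):
    """Flag a signed-off study for additional testing when the model
    confidently disagrees with the clinician's conclusion."""
    probs = model.predict_proba(image)   # assumed: {"normal": p, "abnormal": q}
    model_finding = max(probs, key=probs.get)
    confidence = probs[model_finding]
    flagged = (model_finding != clinician_finding
               and confidence >= disagreement_threshold)
    return ReviewResult(study_id, clinician_finding,
                        model_finding, confidence, flagged)
```

The design point worth noticing is that the review runs after the clinician's read, so it adds no work to anyone's day unless a case is actually flagged.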


[1] Incidentally, the same principle seems to be true for self-driving cars; we expect nothing but a perfect safety record for autonomous vehicles but are quite happy with the status quo for human drivers (1.2 million traffic-related deaths in a single year). Where is the moral panic around the mass slaughter of human beings by human drivers? If an algorithm were even slightly safer than a human being behind the wheel, it would mean thousands fewer deaths per year: a mere 1% improvement on 1.2 million deaths is 12,000 lives. And yet it feels like we're going to delay the introduction of autonomous cars until they meet some perfect standard. To me at least, that seems morally wrong.