When we think of AI, we are naturally drawn to its power to transform diagnosis and treatment planning
and weigh up its potential by comparing AI capabilities to those of humans. We have yet, however, to look at AI seriously through the lens of patient safety. What new risks do these technologies bring to patients, alongside their obvious potential for benefit? And how do we mitigate those risks once we identify them, so we can all have confidence that AI is helping rather than hindering patient care?
Enrico Coiera covers a lot of ground in this short post:
- The prevalence of medical error as a cause of patient harm
- The challenges and ethical concerns that are inherent in AI-based decision-making around end-of-life care
- The importance of high-quality training data for machine learning algorithms
- Related to this, the challenge of poor (human) practice being encoded into algorithms and so perpetuated
- The risk of becoming overly reliant on AI-based decisions
- Limited transferability when technological solutions are implemented in different contexts
- The importance of building patient safety into algorithm design from the start, rather than bolting it on later
Taken together, the points in the summary above give you enough of a foundation to really get to grips with some of the most interesting and challenging areas of machine learning in clinical practice. They might even serve as a useful outline for a pretty comprehensive research project.
For more thoughts on developing a research agenda in related topics, see: AMA passes first policy guidelines on augmented intelligence.
Note: you should also check out Enrico’s Twitter feed, which is a goldmine of cool (but appropriately restrained) ideas about machine learning in clinical practice.