Source: Coiera, E. (2018). The fate of medicine in the time of AI.

The challenges of real-world implementation alone mean that we probably will see little change to clinical practice from AI in the next 5 years. We should certainly see changes in 10 years, and there is a real prospect of massive change in 20 years. [1]

This means that students entering health professions education today are likely to begin seeing the impact of AI on clinical practice when they graduate, and are very likely to see significant changes 3–5 years into their practice. Regardless of what progress is made between now and then, the students we’re teaching today will almost certainly be practising in a clinical environment that is very different from the one we prepared them for.

Coiera offers the following suggestions for how clinical education might be adapted:

  • Include a solid foundation in the statistical and psychological science of clinical reasoning.
  • Develop models of shared decision-making that include patients’ intelligent agents as partners in the process.
  • Prepare clinicians for a greater role in patient safety as new risks (e.g. automation bias) emerge.
  • Ensure clinicians are active participants in developing the new models of care that AI will make possible.

We should also recognise that much remains unknown about where, when and how these disruptions will occur. Coiera suggests that our best predictions should focus on the routine aspects of practice, because this is where AI research will concentrate. As educators, we should work with clinicians to identify the areas of clinical practice most likely to be disrupted by AI-based technologies, and then determine how education needs to change in response.

The prospect of AI is a Rorschach blot upon which many transfer their technological dreams or anxieties.

Finally, it’s also useful to consider that we will see in AI our own hopes and fears and that these biases are likely to inform the way we think about the potential benefits and dangers of AI. For this reason, we should include as diverse a group as possible in the discussion of how this technology should be integrated into practice.

[1] The quote from the article is based on Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
