This is going to be a long post, as it includes an expansion of the notes I took during this symposium. It’s hard to draw a bright line between the presentation content and my extended notes, so I think it’s fair to say that what follows isn’t an accurate record of the presentation. Rather, it’s my perception of the topic, informed by the presentation.

TL;DR

Daniel, M., Wilson, E., Seifert, C., Durning, S. J., Holmboe, E., Rencic, J. J., Lang, V., & Torre, D. (2020). Expanding boundaries: A transtheoretical model of clinical reasoning and diagnostic error. Diagnosis, 7(3), 333–335.

Educators interested in clinical reasoning (and the errors associated with it) have tended to focus on micro-theories of reasoning (e.g. dual-process theory) that help us make informed guesses about what’s happening inside our minds. A more inclusive approach would incorporate macro-theories, which try to explain what’s going on in the world around us. Examples of macro-theories include embodied cognition, ecological psychology, situated cognition, and distributed cognition. Once we start looking at this continuum of expanding spheres of influence on our reasoning (what the presenters are calling a trans-theoretical model), we start to get a sense of how poorly equipped we are when reasoning stays ‘in our heads’. The presenters describe a series of principles for how we teach and assess using this trans-theoretical model; the details may change over time, but the principles seem entirely reasonable.

To be honest, there wasn’t a lot here that I was unaware of. What I found very useful was how the presenters brought together a wide range of theory and practice, in a format that was approachable.


Where we’re at right now

Traditionally, medical educators have tended to focus on what these presenters are calling micro-theories, like dual-process theory, that aim to describe what’s going on in your head when you’re reasoning. This approach to thinking about reasoning involves a consideration of illness scripts and schemas, which can be held in short-term / working memory as chunks of information. Being able to process these chunks efficiently is what enables experts to recognise patterns quickly.

Dual-process theory is heavily informed by Daniel Kahneman’s metaphors of System 1 and System 2 thinking (i.e. the ‘thinking fast and slow’ in the session title).

  • System 1: more intuitive and automatic, and is relatively fast.
  • System 2: more analytic and rational, and is relatively slow.

Biases are the systematic errors in thinking that affect us all. Bias and error are present in both systems, but they are more common in the analytical mode of thinking (i.e. in System 2). You may intuitively think that it’s the quicker, pattern-recognising system that’s more prone to error, but I think that’s more a function of our language. We’re constantly asking kids to ‘slow down’, ‘pay attention’, and ‘think carefully’, so we assume that this results in higher quality output. But we’re really good at pattern recognition (because, evolution), so it’s actually when we slow down to think that we make errors. Another way to look at this is to say that we’re good at pattern matching, and bad at thinking.

We try to address errors of reasoning by using cognitive debiasing (i.e. we attempt to reduce errors by forcing a shift from System 1 to System 2, based on cues that signal the potential for error), and by improving knowledge (i.e. we try to get students to ‘know’ more). Unfortunately, neither of these strategies works well, reducing error by only 1-2%. The presenters suggest that this may be because we’re focused only on what’s going on in the head, and it may be that our errors are caused by broader factors that are more environmental, relational, or contextual.
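To make the debiasing idea above more concrete, here’s a toy sketch of my own (not from the presentation or the paper) of how a cue-triggered prompt might work in a decision-support tool; the cues, thresholds, and names are entirely hypothetical.

```python
# Toy illustration only (my own): a cue-triggered 'debiasing' prompt that nudges
# a clinician from fast, pattern-based thinking (System 1) towards deliberate
# analysis (System 2). The cues and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class CaseReview:
    seconds_to_diagnosis: float   # how quickly the working diagnosis was reached
    features_unexplained: int     # findings not accounted for by that diagnosis
    high_stakes: bool             # e.g. chest pain, sepsis risk


def debiasing_prompts(case: CaseReview) -> list:
    """Return prompts that ask the clinician to re-engage analytical thinking."""
    prompts = []
    if case.seconds_to_diagnosis < 60:
        prompts.append("Diagnosis reached very quickly - what else could this be?")
    if case.features_unexplained > 0:
        prompts.append("Some findings don't fit the working diagnosis - revisit them.")
    if case.high_stakes:
        prompts.append("High-stakes presentation - run through the can't-miss diagnoses.")
    return prompts


if __name__ == "__main__":
    review = CaseReview(seconds_to_diagnosis=40, features_unexplained=2, high_stakes=True)
    for prompt in debiasing_prompts(review):
        print(prompt)
```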

What else should we be thinking about?

This is where the macro-theories of reasoning may be useful to consider. They include:

  • Embodied cognition: Some features of cognition are informed by the body and its relationship with the environment. This means that thinking is not only a process of manipulating abstract symbols in our minds, but also one of using our bodies to interact with the environment. We think with our minds and with our bodies.
  • Ecological psychology: Environments provide affordances that support thinking. However, these affordances are not inherent properties of the environment; they’re relational and dependent on the unique characteristics of individuals.
  • Situated cognition: A theory of learning suggesting that knowledge and skills are remembered better when they are learned in the same contexts in which they are expected to be applied. It argues that knowledge is situated in activities that are bound to social, cultural and physical contexts. Context matters. A lot.
  • Distributed cognition: The idea that mental representations (i.e. thoughts), which classical cognitive science perceives to be held within the individual brain, are actually distributed in socio-cultural systems that constitute the tools to think and perceive the world. It is therefore not only the brain of an individual that interprets reality, but also external artifacts, work teams made up of several people, and cultural systems (mythical, scientific, or otherwise).

I would add another concept to the list above, not mentioned by the presenters: Chalmers’ and Clark’s extended mind. This thesis is the idea that the mind does not exclusively reside in the brain or even the body, but extends into the physical world. In other words, some objects in the environment can be part of a cognitive process that functions as an extension of the mind itself. The concept suggests that ‘the mind’ includes every level of cognition, including the physical level.

A trans-theoretical model of clinical reasoning

The combination of these theories (what the presenters refer to as a trans-theoretical model) helps us to think about thinking beyond individuals, and to incorporate tools and environments into our thinking ‘platforms’ or systems. We need to move beyond individual intelligence to collective intelligence (note: it may also be useful to explore some of Jeff Hawkins’ ideas around collective intelligence; see this excellent Lex Fridman podcast episode with Hawkins). I didn’t see any references to Hawkins’ work in the presentation, so this may not be a part of their thinking.

The presenters make the point that these larger, more inclusive cognitive systems have a greater capacity for reasoning (and maybe a lower error rate), because some tasks can be offloaded to other entities within the system. From my perspective, this isn’t even a question; AI will be the entity to which we hand off many (most?) of our cognitive tasks and activities.

I noted that the presenters emphasised the use of AI for summarisation. I agree that AI will be used for summarisation, but this is a relatively low-level cognitive activity, and I don’t think it will be the most impactful contribution of AI to healthcare. I wondered if there was some concern about pushing too hard in this direction (i.e. were they worried about saying how much of healthcare will be taken over by machines?), or if they really believed that AI would be limited to summarising medical information.

What might a trans-theoretical model of clinical reasoning look like?

I took the image below from the short explainer article that the authors published (see citation in TL;DR above).

How can we teach this trans-theoretical model of clinical reasoning?

I’m not convinced that this is something we can teach. It’s a bit like the point made earlier in the session, i.e. the difference between knowing what and knowing how. I think we have to have students doing more ‘doing’, solving real-world problems in practice (or in simulations of practice). Then the question becomes: how do we pack more ‘doing’ into the curriculum, rather than more thinking? This has been my position for a long time; developing your ability to reason comes with the experience of reasoning. And, as much as we say we value this, I’m not convinced that we give students many opportunities to really reason during their training.

At this point, there was a comment about how AI could help with this. I agree, but I think my perspective was different. The only way this works is if the ‘thinking’ and ‘reasoning’ is handed off to AI, because the number of interacting variables in the increasingly complex world of healthcare is too high to track. Not only is it too high for an individual, discipline, or team; the volume of information is too great for human teams to track and make sense of. We have to move reasoning into AI-based systems because, at some point, all that we’ll be doing is adding noise to the system.

The presenters went on to share five principles that start describing what teaching this model of reasoning looks like in practice. For me, this was the weakest part of the session, as I didn’t get the impression that these principles were theoretically-informed, nor that they were strongly linked to the trans-theoretical model. In my opinion, we’ve been doing – or at least talking about doing – all of what’s in this list. So, if we’re already doing this in undergraduate training, then surely we’re already teaching in the context of the new reasoning model? Anyway, here are the principles.

  • Emphasise experiential learning. There was a suggestion that we need to move beyond clinical cases, and that simulation and virtual reality would be important. I don’t know why. If it’s because of placement shortages, then use simulation and VR to solve the placement shortage problem, but that’s not necessarily linked to the clinical reasoning problem that this model aims to address. And to my point above, we’ve known that this is important forever.
  • Foster environmental awareness by having students working, learning, and being in authentic spaces that they’ll occupy when they graduate. We should also help students prepare for variability and change. OK, but we’re doing all of this anyway, and have been for as long as I’ve been looking at this literature. If this is a suggestion for how we teach using this new model, then I’m confused about what we’ve been doing up to this point. Also, I’m not sure how ‘environmental’ awareness differs from ‘situational’ awareness, which is something we’ve been talking about for a long time.
  • Employ technology and tools effectively. There was a nod towards high-volume data that we can’t process, pointing to the need for AI-based clinical decision-support. As I touched on earlier in this post, I see the hand-off to AI-based systems as a solution to a practical problem (we don’t have the computational capacity to process this information); it’s got nothing to do with a model of reasoning. Also, is anyone trying to use technology ineffectively? Telling us we need to use tools effectively feels odd. How do you put this principle into practice, if we assume that everyone already wants to be effective?
  • Foster teamwork and collaboration. Who is in your broader team? What are their roles and skillsets? What social and cultural factors could be influencing your decision-making? This seems reasonable, but again, we’ve been talking about effective team dynamics for a long time, so how is this linked to the new model? And, more importantly, I felt like the presenters missed the obvious new team member: AI-based autonomous agents. How are we going to prepare students and graduates to work with non-human team members?
  • Encourage systems-level thinking about error. This is probably the one principle in this list that I think is useful. I don’t believe that many people think about systems, and I have no doubt that this is increasingly important in the work we do. But again, I’m not convinced that humans – or teams of human beings – have the computational capacity to truly think at the level of systems. There are simply too many interacting variables.

How do we assess someone’s ability to use trans-theoretical reasoning?

The presenters suggest that assessment of reasoning using this trans-theoretical model needs to emphasise simulation and workplace-based assessment. And again, I’ll ask how this is different from what we’re already doing. There was a suggestion that we currently use MCQs to assess clinical reasoning, but that this is problematic because it only focuses on what’s going on in the head. Note that the presenters aren’t saying that MCQs don’t work for assessing clinical reasoning in individuals, only that they aren’t used for assessing the reasoning ability of the larger context / platform. But, if we can use MCQs to assess what’s going on in the head of one person, is there any reason to think that we can’t use them to assess what’s going on in the collective heads of a team and their environment? I’m not saying I agree that MCQs are optimal, but if they work for an individual, why not for a group?

The presenters also note that there’s a poor correlation between standardised assessment and reasoning in context, so we need a stronger focus on authenticity when we try to assess reasoning ability. Simulation must include the ‘messy’ context of real-world scenarios. Agreed. There’s often a tendency to reduce the complexity of decision-making in simulations. Of course, sometimes we need to reduce the total amount of information that students have to process, but this can slide into a reductionism that takes over.

There was some discussion about how we should use new technology (read, AI) to collect and integrate data across multiple contexts. Of course, AI will be used to help create complex, interactive, dynamic simulations, and then we’ll use AI to track all the observable data being generated through practice.

We should also embrace programmatic assessment, ensuring that the assessment data we capture using multiple, discrete methods is triangulated and interpreted, and that every assessment point generates opportunities for feedback. Obviously, there’s no way that this can be done cost-effectively without using AI-based systems, because the amount of assessment data being generated quickly becomes overwhelming.
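As a way of thinking about what this might look like in practice, here’s a minimal sketch of my own (not from the presentation) of programmatic assessment as a data problem: many low-stakes assessment points from different methods, triangulated per competency, with every point carrying feedback. The method names, competencies, and aggregation rule are all invented for illustration.

```python
# Minimal sketch (my own): triangulating assessment points captured with
# multiple, discrete methods. Names and the aggregation rule are hypothetical.
from collections import defaultdict
from statistics import mean

assessment_points = [
    # (competency, method, score 0-1, narrative feedback)
    ("clinical reasoning", "workplace-based", 0.7, "Good hypothesis generation; broaden the differential."),
    ("clinical reasoning", "simulation",      0.5, "Missed a key contextual cue in the handover."),
    ("clinical reasoning", "MCQ",             0.8, "Strong applied knowledge."),
    ("teamwork",           "workplace-based", 0.6, "Clarify roles earlier when escalating."),
]


def triangulate(points):
    """Aggregate scores per competency across methods, keeping all feedback."""
    scores, feedback = defaultdict(list), defaultdict(list)
    for competency, method, score, note in points:
        scores[competency].append(score)
        feedback[competency].append(f"[{method}] {note}")
    return {c: {"mean": round(mean(s), 2), "n": len(s), "feedback": feedback[c]}
            for c, s in scores.items()}


for competency, summary in triangulate(assessment_points).items():
    print(competency, summary["mean"], f"(n={summary['n']})")
    for note in summary["feedback"]:
        print("  -", note)
```

Even this toy version hints at why the data volume becomes overwhelming once it’s scaled across a whole cohort and programme.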

And finally, we need a culture shift to assessment for learning, and the use of open-book assessments. But again, how is this different to the calls that have been taking place in the assessment literature for a decade? The cynic in me is wondering if the authors / presenters are simply shoe-horning the trending assessment practices into this model. I don’t disagree with any of the suggestions. In and of themselves, they all seem quite reasonable. I’m just not sure how they relate to the trans-theoretical model of reasoning. Maybe this is not fair; maybe I’m looking for parts of the model that weren’t included in the session today for practical reasons. Or maybe this is still too early.

How might this model influence your approach to teaching and assessing clinical reasoning?

To be honest, I’m not sure that much, if anything, is going to change. As I’ve said several times in this post, I’d put all of this under the general heading of ‘good practice’. I appreciate the attempt at a theoretical model that incorporates different ways of thinking about reasoning, and it’s reinforced some of my own thinking, especially around the future role of AI in clinical decision-making (although I’m pretty sure these presenters and I would disagree about exactly what that role would be).

There was a good question about how much of our professional regulatory and higher education ecosystems assume that knowledge and thinking sit inside the heads of individuals, and how much of a challenge it will be to change those systems. I can’t even meaningfully assess team-based decision-making in HPE. For example, the physiotherapy programme, institutional grading systems, and higher education accreditation system all expect me to assign a competence level to a person, not a team. The entire ecosystem assumes that everything meaningful is taking place inside the head of a person.

There was some useful discussion around the fact that we teach students to take linear pathways through the clinical problem space, even while we know that those spaces are complex and that solutions aren’t linear. We reduce the complexity of practice and teach them that diagnosis is algorithmic (i.e. ‘if this, then that, or something else’), when it’s really closer to fuzzy logic than to discrete ideas of ‘right’ and ‘wrong’. I know that this is a hard problem and that sometimes there really is a right answer.
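To illustrate the contrast I’m drawing, here’s a toy sketch of my own (not from the session) of the difference between a crisp ‘if this, then that’ rule and a graded, fuzzy-style score over the same findings; the features and weights are invented.

```python
# Toy contrast (my own): a crisp, algorithmic rule versus a graded, fuzzy-style
# score for the same findings. Features and weights are invented for illustration.

def crisp_rule(fever: bool, cough: bool) -> str:
    # 'If this, then that' - a single branch decides the answer outright.
    return "pneumonia" if fever and cough else "not pneumonia"


def graded_support(temp_c: float, cough_severity: float) -> float:
    """Return a degree of support in [0, 1] rather than a binary verdict."""
    fever_degree = min(max((temp_c - 37.0) / 2.0, 0.0), 1.0)  # 37C -> 0, 39C+ -> 1
    return round(0.6 * fever_degree + 0.4 * cough_severity, 2)


print(crisp_rule(fever=True, cough=False))              # 'not pneumonia' - all or nothing
print(graded_support(temp_c=38.2, cough_severity=0.3))  # 0.48 - partial support
```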

Even though I don’t agree with everything that was presented in this session, I found it to be thoughtful, insightful, and stimulating. If I get nothing else from attending AMEE, this session would have made it worthwhile.

Where do I think this is headed?

This question wasn’t part of the session, and is my own position after having reflected on the symposium. If you disagree completely, it’s not the fault of the presenters of this symposium.

I think that we could probably forget about the problem of clinical reasoning entirely because in time, AI-based systems will handle all the reasoning around patient assessment, treatment, and management. I think that humans will carry out the recommendations made by machines, until such time as robots have the dexterity for fine manipulation, and have earned our trust. At that point, they’ll carry out the management themselves. And the main reason that I think this is where we’ll end up is a purely practical one. Our small brains won’t be able to deal with all the health data being generated by wearables and ingestibles, and we’ll have the evidence showing that more computation leads to better patient outcomes. We’ll have a moral responsibility to hand over the bulk of patient care to machines, because they’ll be more patient-centred than we are.

The patient’s AI will engage with the hospital’s AI, and those two systems will reach a consensus about what the most reasonable outcome might look like, and what solutions to implement to achieve it. Patients won’t even come to hospital, except for surgical intervention, and precision medication will be 3D printed at home. Personalised apps under the patient’s control will share data – with patient consent – with other systems (health, finance, legal, etc.), where it will be analysed and interpreted in real-time. Combined with data gathered from internal sensors, why on earth would we care what doctors think?

I know that the conventional wisdom is that humans are still important because at the moment, we can’t encode values into algorithms, and so our replacement by machines is unlikely (I expand on the counter-argument in this rejected abstract for AMEE). However, it seems likely that machines will learn how to integrate human values into their processes. And they’ll probably do it in ways that are less biased than how we do it.

In my (admittedly controversial) opinion, we should probably be spending most of our time trying to move this scenario forward. As one of the presenters pointed out, human error may be the third leading cause of death in the US. The sooner we remove ourselves from the decision-making process, the better.

