Comment: How do we learn to work with intelligent machines?

I discussed something related to this earlier this year (the algorithmic de-skilling of clinicians) and thought that this short presentation added something extra. It’s not just that AI and machine learning have the potential to create scenarios in which qualified clinical experts become de-skilled over time; they will also impact on our ability to teach and learn those skills in the first place.

We’re used to the idea of a novice working closely with a more experienced clinician, and learning from them through observation and questioning (how closely this maps onto reality is a different story). When the tasks usually performed by more experienced clinicians are outsourced to algorithms, who does the novice learn from?

Will clinical supervision consist of talking undergraduate students through the algorithmic decision-making process? Discussing how probabilistic outputs were determined from limited datasets? How to interpret confidence levels of clinical decision-support systems? When clinical decisions are made by AI-based systems in the real world of clinical practice, what will we lose in the undergraduate clinical programme, and how do we plan on addressing it?

Using online multimedia to teach practical skills

During 2016 I supervised an undergraduate research group in my department and we looked at the possibility of using multimedia – including video, images and text – to teach students practical skills. Traditionally, we teach these skills by having the lecturer demonstrate the technique on a model while the class watches. Students then break into small groups to practise while the lecturer moves around the class, giving feedback, correcting positions and answering questions.

This process was pretty much the only option for as long as we’ve been teaching practical techniques, but it has its disadvantages:

  • As class sizes have grown, it’s increasingly difficult for every student to get a good view of the technique. Imagine 60 students crowded around a plinth trying to see what the lecturer is demonstrating.
  • Each student only gets one perspective of the technique. If you’re standing at the head of the model (maybe one or two rows back) and the demonstration is happening at the feet, you’re not going to get any other angle.
  • There are only so many times that the technique will be demonstrated before students need to begin practising. If you’re lucky the lecturer will come around to your station and offer a few more demonstrations, but owing to the class size, this isn’t always the case.

We decided that we’d try to teach a practical technique to half the class using only a webpage. The page included two videos of the technique, step-by-step instructions and images. We randomly selected half the class to go through the normal process of observing the lecturer demonstrate the technique; the other half were taken to another venue, given the URL of the webpage and asked to practise among themselves. Two weeks later we tested the students using an OSCE. Students were evaluated by two researchers using a scoring rubric developed by the lecturer, with both assessors blinded to which students had learned the technique from the webpage.
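For readers curious about the mechanics, the randomisation and blinded scoring could be sketched along these lines. The student identifiers, group sizes, rubric items and scores below are made up for illustration; they are not the actual roster or rubric from the study.

```python
import random

# Hypothetical class roster of 60 students (illustrative only).
students = [f"S{i:02d}" for i in range(1, 61)]

# Randomly assign half the class to the multimedia condition and
# the other half to the traditional lecturer demonstration.
random.seed(42)  # fixed seed so the split is reproducible
shuffled = random.sample(students, k=len(students))
multimedia_group = set(shuffled[: len(students) // 2])
traditional_group = set(shuffled[len(students) // 2:])

def osce_score(assessor_a_items, assessor_b_items):
    """Average the rubric totals from two assessors, both blinded
    to which group the student belonged to."""
    return (sum(assessor_a_items) + sum(assessor_b_items)) / 2

# Example: one student's five rubric items, each scored 0-2,
# as marked independently by the two blinded assessors.
print(osce_score([2, 1, 2, 2, 1], [2, 2, 2, 1, 1]))  # 8.0
```

The key design points are visible in the sketch: the split is random rather than self-selected, and the scoring function never sees group membership, which is what keeps the assessors blinded.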

We found that the students who only had access to the multimedia and no input from the lecturer performed better in the OSCE than the students who had observed the lecturer. This wasn’t very surprising when you consider the many advantages that video has over face-to-face demonstration (rewind, pause, watch later, etc.) but it nonetheless caused a stir in the department when the students presented their findings. We had to be careful how we framed the findings so as not to suggest that the multimedia approach could replace the traditional one, but rather complement it.

There were several limitations to the study:

  • The sample size was very small (only 9 students from the “multimedia” class took the OSCE, as it was voluntary)
  • We have no idea whether students in the multimedia class asked students from the “traditional” class to demonstrate the technique for them
  • We only taught and tested one technique, and it wasn’t a complex technique
  • Students knew that we were doing some research and that this was a low stakes situation (i.e. they may not have paid much attention in either class since they knew it would not affect their final grades)

Even taking the above into consideration, I’m comfortable saying in principle that the use of video, text and images to teach undergraduate students uncomplicated practical techniques is a reasonable approach. Instead of being defensive and worrying about being replaced by a video, lecturers could see this as an opportunity to move tedious, repetitive tasks outside the classroom, freeing up time in class for more meaningful discussion: Why this technique and not that one? Why now? At what level? For which patients? It seems to me that the more simple, content-based work we can move out of the classroom, the more time we have with students to engage in deeper work. Wouldn’t that be a good thing?