Using online multimedia to teach practical skills

During 2016 I supervised an undergraduate research group in my department, and we looked at the possibility of using multimedia – including video, images and text – to teach students practical skills. Traditionally, we teach these skills by having the lecturer demonstrate the technique on a model while the class watches. Students then break into small groups to practice while the lecturer moves around the class, giving feedback, correcting positions and answering questions.

This process was pretty much the only option for as long as we’ve been teaching practical techniques, but it has its disadvantages:

  • As class sizes have grown, it’s increasingly difficult for every student to get a good view of the technique. Imagine 60 students crowded around a plinth trying to see what the lecturer is demonstrating.
  • Each student only gets one perspective of the technique. If you’re standing at the head of the model (maybe one or two rows back) and the demonstration is happening at the feet, you’re not going to get any other angle.
  • There are only so many times that the technique will be demonstrated before students need to begin practising. If you’re lucky the lecturer will come around to your station and offer a few more demonstrations, but owing to the class size, this isn’t always the case.

We decided that we’d try to teach a practical technique to half the class using only a webpage. The page included two videos of the technique, step-by-step instructions and images. We randomly selected half the class to go through the normal process of observing the lecturer demonstrate the technique, while the other half were taken to another venue, given the URL of the webpage and asked to practice among themselves. Two weeks later we tested the students using an OSCE. Students were evaluated by two researchers using a scoring rubric developed by the lecturer, and both assessors were blinded to which students had learned the technique using the webpage.
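For what it’s worth, the allocation and scoring logic is simple enough to sketch in a few lines of code. The student names, rubric scores and two-assessor format below are invented purely for illustration; this is not our study data.

```python
import random
import statistics


def assign_groups(students, seed=42):
    """Randomly split a class into 'traditional' and 'multimedia' halves."""
    shuffled = students[:]
    random.Random(seed).shuffle(shuffled)  # seeded so the split is reproducible
    half = len(shuffled) // 2
    return {"traditional": shuffled[:half], "multimedia": shuffled[half:]}


def mean_rubric_score(scores_by_assessor):
    """Average each student's two blinded assessor scores, then take the group mean."""
    per_student = [statistics.mean(pair) for pair in scores_by_assessor]
    return statistics.mean(per_student)


# Hypothetical (assessor A, assessor B) rubric scores for each student in a group.
multimedia = [(8, 9), (7, 8), (9, 9)]
traditional = [(6, 7), (7, 6), (8, 7)]

print(round(mean_rubric_score(multimedia), 2))   # group mean for the multimedia group
print(round(mean_rubric_score(traditional), 2))  # group mean for the traditional group
```

With a sample this small you would obviously want a proper statistical comparison rather than a naive difference in means, but the sketch captures the shape of the design: random allocation first, then averaging across blinded assessors before comparing groups.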

We found that the students who only had access to the multimedia, with no input from the lecturer, performed better in the OSCE than the students who had observed the lecturer. This wasn’t very surprising when you consider the many advantages that video has over face-to-face demonstration (rewind, pause, watch later, etc.), but it nonetheless caused a stir in the department when the students presented their findings. We had to be careful how we framed the findings, so as not to suggest that multimedia could replace the traditional approach, but rather that it could complement it.

There were several limitations to the study:

  • The sample size was very small (only 9 students from the “multimedia” class took the OSCE, as it was voluntary)
  • We have no idea whether students in the multimedia class asked students from the “traditional” class to demonstrate the technique for them
  • We only taught and tested one technique, and it wasn’t a complex technique
  • Students knew that we were doing some research and that this was a low stakes situation (i.e. they may not have paid much attention in either class since they knew it would not affect their final grades)

Even taking the above into consideration, in principle I’m comfortable saying that the use of video, text and images to teach undergraduate students uncomplicated practical techniques is a reasonable approach. Instead of being defensive and worrying about being replaced by a video, lecturers could see this as an opportunity to move tedious, repetitive tasks outside the classroom, freeing up time in the classroom for more meaningful discussion: Why this technique and not that one? Why now? At what level? For which patients? It seems to me that the more simple, content-based work we can move out of the classroom, the more time we have with students to engage in deeper work. Wouldn’t that be a good thing?

Stop curating content for students

There’s no point in spending any time curating content for students. Think of all the time you spend searching for, filtering, aggregating, and collating content for students. Then the time you need to spend keeping that list updated. Every year there’ll be new resources available, which means you need to start comparing what you have with what is new and pruning the list accordingly. All of this is done with the best of intentions: helping students spend less time on “admin” and more time on learning. But what if the admin is actually a really important part of the learning?

As far as I can tell, there are two main approaches to curating content for students:

  • You can aggregate information from other people, which is easier and quicker, but it means 1) you have to keep up to date with what everyone else is doing, and 2) the information is unlikely to be exactly what your students need.
  • You can create your own content using a variety of other sources, which is arguably better for your students (e.g. it’s context-specific) but it has a significant workload implication.

In both of the above cases, you are responsible for keeping the resources up to date for the foreseeable future. What is the long-term sustainability of this? In five years’ time will you still be aggregating content for your students? This approach – whether you’re finding other people’s content or creating your own – is only reasonable in a context of information scarcity. When it’s hard to find the appropriate content, it makes sense to point students in the right direction by curating a list. But we’re not in a context of information scarcity anymore, and collecting words no longer has the value it used to.

I think it’s far more useful to teach students how to find the information they need at the time that it’s needed. This is how you prepare them for the future. This is how they learn what to do when there’s no-one there telling them what to do. It’s the difference between you telling students what is important and teaching them how to make their own choices about what is important. The first (curating content) creates a context where students are dependent, obedient, and under control. The second helps them learn how to be independent and personally empowered. So maybe we should stop finding and presenting the information that (we think) students need, and instead teach them how to find what they need, when they need it.

The incentives to create effective teams are all wrong

I just finished a meeting where I realised that the incentives provided for academics are all wrong (if you assume that having an effective department is a goal). If we want departments to be excellent (however you define excellence) we must accept that they can only get to that point if the staff work together as a team. However, academics are not incentivised to work as a team within those departments unless they happen to all be working on the same research project. While it’s true that academics are expected to work on larger projects in larger teams as they progress through the system, those projects and teams are typically not within the same department, or even institution.

The reason for this is that we have to keep expanding our sphere of influence, looking to work with colleagues from other institutions and then in other countries. As I grow as an academic all of the reward structures direct me to look for collaborative opportunities outside of my home department. If I ever actually manage to develop a high performing, excellent team in my own department, there is no way for me to be rewarded or even recognised in any meaningful way for that. OK, maybe I can tick the “Administration” box with a really big tick but there’s no way it’s going to give me an edge over someone else who is working on an international collaboration. All things being equal, “internationally recognised researcher” trumps “has developed a culture of excellence in home department”. And yet, many of the problems we experience in higher education can be traced back to poor / weak learning cultures within departments.

The more I see myself and my colleagues progress in our academic careers (through promotions and attaining higher degrees), the more I see the institution pressurising us to look beyond our own departments. One implication is that fewer people commit to the responsibilities that are necessary for departments and faculties to run effectively. We need to coerce (sorry, encourage) each other to accept seats on committees because the time we spend on committees is time that we’re not working on a collaborative proposal. And even though the criteria for promotion do include wording to the effect of “Participates actively in faculty committees”, I doubt that my lack of engagement on those committees is going to affect my promotion when I’m working on international projects and publishing fairly regularly.

I worry that the pressure from the institution on “senior” academics to increase their sphere of influence is going to have the following (unintentional) side effects on departments:

  • A reduced emphasis on the success of individual departments (because individual academics are rewarded on the basis of their collaboration outside their departments)
  • A lack of attention being paid to the undergraduate curriculum (because postgraduate throughput leads to income generation and publication)
  • Fewer staff willing to participate in department and faculty committees (because it takes time away from what really matters i.e. research)
  • Allocation of first year modules to staff with the least experience, when the reality is that our best teachers should focus their attention on the newest cohorts. But in fact, we are seeing a withdrawal of experienced staff from the undergraduate curriculum entirely (because experienced staff can’t afford to devote time to a process that won’t advance their careers i.e. undergraduate teaching)
  • Departmental processes gradually dissolving until the department limps along, with everyone doing the minimum necessary to avoid completely closing down (because “being part of an excellent department” doesn’t fit anywhere on my CV)

I’m sure that there are more but this is how far I’ve gotten in the time I allocated for this post. I don’t know what the answer is. We want our staff to progress in their careers, but that progression comes with pressure – through the institutional incentives – to spend less time on ensuring that the department functions as a high performing team. In reality, departments just need to get by, because as long as the wheels keep turning and the department doesn’t actually fall apart, there is no incentive for academics to build the internal relationships that allow excellent teams to develop.

Virtual reality in clinical education: A research project outline

I was lucky enough to spend some time chatting with Ben Ellis from Oxford Brookes University about the possibilities of using VR for clinical education. A decade ago, virtual reality was something that only the military and high-end research labs could afford. But recently, thanks to initiatives like Google’s Cardboard, Daydream and Jump, pretty good VR experiences can be created and shared at relatively low cost. The purpose of this post is to – very briefly – explore what a VR research project in clinical education might look like.

Google’s Jump camera rig.

Establish a clinical / educational problem that is difficult to address in a traditional educational context. There are many examples but the one I always think about is the undergraduate student who is working with a patient who goes into cardiac arrest. That’s a situation we can’t plan for and that no amount of theoretical study will prepare the student for. A less extreme example might be the novice student who goes into the ICU for the first time.

Highlight the learning context. I would take this in the direction that learning in these situations is about exploring the emotional response that students experience when exposed to traumatic – or at least difficult – clinical encounters. Imagine debriefing a student after a variety of controlled exposures to very challenging clinical experiences. For example, what possibilities exist for designing those experiences to introduce students into situations where they may be morally compromised?

Describe how virtual reality can be used to work on the problem. There’s enough literature to show that exposure to situations that look and sound real (i.e. have high fidelity) can lead to a visceral response from students. We could create scenarios that are impossible to plan for in the real world, and then work with students in those controlled contexts to help them learn how to respond later.

Create the VR experiences using relatively low cost gear e.g. Google’s Jump camera rig. The research proposal would budget for buying the cameras needed to create the experiences. We’d collaboratively design the experiences across departments in different countries, so that the experiences students are exposed to could be quite diverse in nature. With 2-3 camera rigs we could probably put together a small library of experiences from several different placements.

Run the project. Expose students from a variety of different departments to those simulated clinical encounters and conduct debriefing sessions afterwards. Record the sessions (obviously we’d have consent, etc. since this would be a registered research protocol) and conduct analysis on the transcriptions. Share the outcomes and responses between the collaborating institutions.

Use the interpreted data to develop a model of engagement in these contexts. Prepare a worksheet – or something like that – to enable others to prepare students in advance, guide the debriefing, etc. Publish the models on an open access repository (e.g. Physiopedia), along with the VR experiences themselves, allowing anyone with a phone to go through the same experiences.

OK, so it’s not complete and there are probably a ton of problems with the idea so far, but I wanted to get it out there as a base to work from. If you’re interested in the potential of VR in clinical education, please get in touch.

Teaching, learning and risk

I’ve had these ideas bouncing around in my head for a week or so and finally have a few minutes to try and get them out. I’ve been wondering why changing practice – in higher education and the clinical context – is so hard, and one way that I think I can make some sense out of it is to use the idea of risk.

To change anything is to take a risk where we don’t know what the outcome will be. We risk messing up something that kind-of-works-OK and replacing it with something that could be worse. To change our practice is to risk moving into spaces we might find uncomfortable. To take a risk is to make a decision that you’re OK with not knowing; to be OK with not understanding; to be OK with uncertainty. And many of us are really not OK with any of those things. And so we resist the change because when we don’t take the risk we’re choosing to be safe. I get that.

But the irony is that we ask our students to take risks every single day, because to learn is to risk. Learning is partly about making yourself vulnerable by admitting – to yourself and others – that there is something you don’t know. And to be vulnerable is to risk being hurt. We expect our students to move into those uncomfortable spaces where they have to take ownership of not knowing and of being uncertain. “Put your hand up if you don’t know.” To put your hand up and announce – to everyone – that you don’t have the answer is really risky.

Why is it OK for us to ask students to put themselves at risk if we’re not prepared to do the same? If my students must put their hands up and announce their ignorance, why don’t I? If change is about risk and so is learning, is it reasonable to ask if changing is about learning? And if that’s true, what does it say about those of us who resist change?

Stories, not containers: What is a course?

We think of courses as containers; containers for the outcomes, content and assessments related to a topic. Students move through the course – from one concept to another – until they get to the assessment at the end, which signals the end of the course. The course is bound in time; it has a definite beginning and end, and it requires us to map out the course structure long before we meet the participants. How, then, can this structure recognise the unique characteristics of individuals? Courses as containers are formalised and standardised, and ultimately far more about compliance and conformity than creativity, ingenuity, innovation, or even mastery. There may be some administrative benefits to thinking of courses in this way, but there are few benefits that are pedagogical. In other words, the course-as-container metaphor doesn’t enhance learning in any way.

If we want a student-centred, inquiry-based course we must disregard the course as container and come up with another way to think about courses. Lately I’ve been wondering if the course could be structured as a user-generated story; an unscripted narrative that integrates participant experience with course concepts, leading to unpredictable and delightful outcomes. Instead of thinking of the course as a container – closed and inflexible – what if it was a stage upon which the process of learning could be enacted in order to tell stories? What if the course was an open space that enabled personal learning to progress in directions that we cannot anticipate? The course framework could include some things that participants would need to tell their version of the story – provocations, an audience, collaborators, basic structure – while also allowing them to bring in their own elements – experience, knowledge, beliefs, etc.

What if a course began like a great story, with an opening scene that grabbed your attention? What if we started with a provocative context that generated a “Whoa!” moment: a cascade of questions that threatens someone’s core beliefs? This opening scene could establish a learning context where every participant realises that their understanding and practices are going to be questioned. It becomes clear that this course will not have a neat and tidy resolution, and that it is going to require a confrontation with the messiness and uncertainty of the world. Participants know, from the beginning, that this course is not for the faint of heart.

After the opening scene the course begins to unfold, allowing each participant to take a different direction. The structure of the course not only acknowledges every participant’s unique context and history, but actually aims to embrace and use it. There is an unfolding sequence of action and reflection where each participant chooses which “storyline” to follow. One might watch the embedded video while another is caught up in the patient scenario. Other participants are drawn to the poems and art section where course concepts are explored with multimedia artifacts. Yet others choose to read the research paper or the book review. Depending on where they see “the evidence” residing, participants make choices about how they wish to explore the topic.

There is therefore both controlled and uncontrolled content, where the (un)structure of the course enables participants to engage with different perspectives right from the start. Content is negotiated by the participants within the context of the course, and decisions are made about what is important to include. This enables the course to be built – as it unfolds – around the critical examination of the concepts, hierarchies and assumptions that exist at its centre.

As participants engage with the course concepts via different media, questions are triggered which lead to the development of research queries that aim to provide information that participants need in order to build their story. These resources then become a course “reading” list (it could include videos and art) generated by participants during the course. Course content is therefore created in the moment as participants write their own stories using personal experience, concepts from the course, group conversations and the additional resources generated by other participants. They aggregate resources from multiple sources, remix these in various ways, adapt and repurpose them to suit their own needs, and then share them. The content is therefore created as it is needed. It will also be different every time the course is enacted because different participants will take the narrative in different directions, leading to different outcomes.

The course also provides the time and space for participants to step back and reflect. To “put down the book” and step outside. We need a moment where, before we can move on with the story, we must first come to terms with what we’ve just learned. There are some ideas that are too big to take in at once, and we need to step away to think about what they mean for us. Sometimes – when the ideas are big enough and uncomfortable enough – we need to think about whether or not we even want to continue with the story. We need courses that are cognisant of the need to “step back” and that give participants the space they need to work with difficult ideas.

While the course itself is bound with beginning and end points (we can’t have facilitators and participants forever enrolled), the interactions and community that develop during the course could continue when it ends. The course is designed to outgrow itself and to leave space for community engagement and response that extends beyond the boundaries set for each iteration of the course. Just like stories can stay with you long after you finish the last page, so the thinking and reflections generated in the course as story continue long after the final task is completed. In fact, completing the final task doesn’t signal the end of something; instead it highlights that this is the beginning of a change in how you think about the world.

At the beginning of the course as story, it is the group who collectively decide what “success” looks like and how it will be assessed at the end. Perhaps they decide that a short book will be the final product, where each participant takes the lead in developing a collaboratively created chapter, and each chapter covers a topic in the course. Maybe “success” for another cohort is a website where they describe their process, including reflections, drawings, photos, video diaries and audio recordings. Maybe someone in the group composes a song that they all perform and that gets published. Maybe “success” is an exhibition at a gallery. We must remember that there are few limitations to what should be attempted in the pursuit of sustained, meaningful learning. There are far more ways to determine “success” than performance on a test or submission of an essay. Thinking of the course as an unscripted story without a predetermined outcome makes it easier to see what those other descriptions of success might look like.

The best stories aren’t the ones that take you down a predictable and narrowly focused path. The best stories open you up to the possibility that everything you thought about something is being questioned. The best stories don’t answer all the questions and aren’t neatly wrapped up at the end. The best stories are starting points that leave you asking, “What next?”. Shouldn’t our courses do the same?

What conversation about curriculum should we be having?

There are tensions between all the relevant stakeholders in the training of health professionals, largely as a result of differences in expectations. These tensions can easily be seen between:

  1. The Department of Education and the Department of Health
  2. Academics at university and clinicians in the practice environment
  3. Government (usually rural) and private (usually urban) clinical contexts

Each of these groups (rightly) has different priorities with respect to the outcomes they value, and it’s very difficult to satisfy everyone. But what everyone seems to agree on is the nature of the conversation that we end up having. Except in very rare cases, the conversation about undergraduate health professions education almost always comes down to the acquisition of knowledge and skills: what do we want our new graduates to know and to do?

But this is the wrong conversation. In complex contexts and uncertain futures we can’t afford to focus our attention on what graduates know and do, but should rather pay attention to how they think and how they learn. Yet this is something that is almost universally absent from any conversation about the curriculum. As long as we’re talking about what content to include in the curriculum we’re missing the point that the biggest gap in our students’ repertoire when they graduate is that they don’t know how to think about learning.

Learning how to adapt to new and dynamic contexts is the most important skill that any new graduate can have, and yet this is probably the thing that we pay the least attention to.

The future of education in complex systems

This is the first draft of an Editorial I wrote for the open access African Journal of Health Professions Education, which will be coming out soon.

Health and education systems are increasingly recognised as complex adaptive systems that are characterised by high levels of uncertainty and constant change as a result of rich, non-linear interactions (Fraser & Greenhalgh, 2001; Bleakley, 2010). This means that complex systems are inherently ambiguous and uncertain, and that they lack predictable outcomes or clear boundaries. As health and education systems have become more complex and integrated at the beginning of the 21st century, it is no longer possible for single individuals – or even single disciplines – to work effectively within these systems (Frenk et al., 2010).

The problems generated by complex systems have been called wicked problems: they are not simply difficult to solve, they are impossible to solve (Conklin, 2001; Ritchey, 2013). They are “messy, devious, and they fight back when you try to deal with them” (Ritchey, 2013). They’re the kinds of problems where different stakeholders have different frameworks for even trying to describe the problem, and where the constraints and resources necessary to work on the problem change over time (Conklin, 2001).

Wicked problems are also about people, vested interests and politics – making them very subjective – which is why they do not have stable problem formulations or pre-defined solution concepts, and why their outcomes are unpredictable (Ritchey, 2013). Even though we cannot solve wicked problems, we can move them forward by learning how to adapt to change, generate new knowledge, and continue improving performance (Fraser & Greenhalgh, 2001). The uncertainty of complex systems is therefore something that we need to be comfortable with, learn to engage with, and be curious about. Wicked problems are not amenable to resolution through formal, structured methods; we must rather adapt to working within them.

The ability to drive progress in complex systems is a function of the ability to generate and connect ideas across groups and disciplines, and then implement new processes based on them. Not only do these activities take time, they are highly social as success often depends on who we work with (Jarche, 2016). In other words, teams are not only important for effective work but also for the kinds of generative, creative work that 21st century problems require. The ability to work in effective, interdisciplinary and creative teams is what we need to address the health problems of the future.

If the knowledge and skills required to work with wicked problems in complex systems are so diverse that it is impossible for a single individual or profession to make any appreciable impact, it is clear that we need teams that work across disciplinary boundaries. Therefore, interprofessional education is one possible strategy that we can follow to try and develop the requisite competencies for working within complex systems. These competencies include – among others – the ability to develop relationships, emotional intelligence, group work, communication and self-management, all of which are difficult to develop and assess within students (Knight & Page, 2007).

In fact, higher education is not at all well positioned to help students develop the competencies that enable them to work with wicked problems in complex systems. Social learning theories that can help practitioners become more effective in non-linear, dynamic systems – through inter-professionalism and a shared tolerance of ambiguity – are generally absent, especially in medical education (Bleakley, 2010). Adopting these approaches at the programme level in health professions education requires the kind of radical change that traditional health and education systems are highly resistant to (Frenk et al., 2010). If we want to make any real progress in improving health and education outcomes in an increasingly complex world, we must start taking seriously the idea that radical curriculum reform is not only indicated, but required.


Bleakley, A. (2010). Blunting Occam’s razor: aligning medical education with studies of complexity. Journal of Evaluation in Clinical Practice, 16(4), 849–855.

Conklin, J. (2001). Wicked problems and social complexity. CogNexus Institute. [Online].

Fraser, S. W., & Greenhalgh, T. (2001). Coping with complexity: educating for capability. BMJ, 323, 799–803.

Frenk, J., Chen, L., Bhutta, Z. A., Cohen, J., Crisp, N., Evans, T., … Zurayk, H. (2010). Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. The Lancet, 376(9756), 1923–1958.

Jarche, H. (2016). Valued work is not standardized.

Knight, P. T., & Page, A. (2007). The assessment of “wicked” competences: A report to the Practice-based Professional Learning Centre for excellence in teaching and learning in the Open University. Retrieved from…/460d21bd645f8.pdf

Ritchey, T. (2013). Wicked problems: Modelling social messes with morphological analysis. Acta Morphologica Generalis, 2(1).

Who cares about “referencing”?

Why do we teach our students how to reference? Mendeley, EndNote, RefWorks, and similar tools all do it for you. In my experience, the emphasis in higher education is almost always on what the citation looks like and not on the work the citation does. When students learn about referencing, the focus is almost always on:

  1. Plagiarism: If you don’t reference, you’re stealing.
  2. Format: If it doesn’t conform to [insert style guide], it’s wrong.

This is problematic. The first point begins with the assumption that our students are cheats and frauds, and I’d rather not begin the relationship with that frame of reference. The second point is irrelevant because style guides explain exactly how to format the citation, and software formats it for us.

What matters is that students understand the underlying rationale of attribution and of building on the ideas of others. I’m far more interested in talking about ideas with my students than in where the comma goes. Instead of talking about the importance of referencing, maybe we should aim to instil in students a love of ideas. Sometimes those ideas originated with someone else (citation required) and sometimes those ideas are your own. What does the world look like when we use ideas – some our own and some from others – to think differently? That seems like a more interesting conversation to have.

Public posting of marks

My university has a policy where the marks for each assessment task are posted – anonymously – on the departmental notice board. I think it goes back to a time when students were not automatically notified by email and individual notifications of grades would have been too time consuming. Now that our students get their marks as soon as they are captured in the system, I asked myself why we still bother to post the marks publicly.

I can’t think of a single reason why we should. What is the benefit of posting a list of marks in which students are ranked against how others performed in the assessment? It has no value – as far as I can tell – for learning. No value for self-esteem (unless you’re performing in the higher percentiles). No value for the institution or the teacher. So why do we still do it?

I conducted a short poll among my final-year ethics students, asking them if they wanted me to continue posting their marks in public. See below for their responses.


Moving forward, I will no longer post my students’ marks in public, nor will I publish class averages, unless specifically requested to do so. If I’m going to say that I’m assessing students against a set of criteria rather than against each other, I need to have my practice mirror this. How are students supposed to develop empathy when we constantly remind them that they’re in competition with each other?