The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs. Say the AI system was meant to help make medical decisions. Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost. “Have the system be explicitly unsure and hand the dilemma back to the humans.”
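The "menu of options" idea can be sketched in a few lines of code. This is a toy illustration only: the treatments, objectives and scores below are all invented, and a real system would derive them from trained models rather than a hard-coded table. The point is simply that the system returns all candidates with their trade-offs, rather than a single recommendation.

```python
# Hypothetical treatment options with (invented) scores on three objectives.
# Higher is more of that quantity, so "cost" and "suffering" should be minimised.
options = [
    {"treatment": "A", "life_span": 0.9, "suffering": 0.6, "cost": 0.8},
    {"treatment": "B", "life_span": 0.6, "suffering": 0.1, "cost": 0.5},
    {"treatment": "C", "life_span": 0.5, "suffering": 0.4, "cost": 0.2},
]

def best_for(objective, maximise=True):
    """Return the option that scores best on a single objective."""
    return max(options, key=lambda o: o[objective] if maximise else -o[objective])

# The "menu" handed back to the humans: one option per objective,
# leaving the actual trade-off decision to the clinician and patient.
menu = {
    "maximise life span": best_for("life_span")["treatment"],
    "minimise suffering": best_for("suffering", maximise=False)["treatment"],
    "minimise cost": best_for("cost", maximise=False)["treatment"],
}
```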
This is how I think about clinical reasoning: it's the kind of probabilistic thinking in which we take a lot of – sometimes contradictory – data and try to make a decision that we can hold with varying levels of confidence. For example, "If A, then probably D. But if A and B, then unlikely to be D. If C, then definitely not D." Algorithms (and novice clinicians) are quite poor at this kind of reasoning, which is why they've traditionally not been used for clinical decision-making and ethical reasoning (and why novice clinicians tend not to handle clinical uncertainty very well). But if it turns out that machine learning algorithms are able to manage conditions of uncertainty and provide a range of options that humans can act on, given a wide variety of preferences and contexts, then machines will be one step closer to doing our reasoning for us.
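Those "If A, then probably D" rules can be made concrete in a small sketch. The findings, rules and probabilities below are entirely made up; the point is only that contradictory evidence is combined into a diagnosis held with an explicit, graded level of confidence rather than a yes/no answer.

```python
# Toy probabilistic reasoning over clinical findings (all rules invented).
def probability_of_d(findings):
    """Return a confidence (0-1) that the diagnosis is D, given a set of findings."""
    if "C" in findings:
        return 0.0   # "If C, then definitely not D"
    if "A" in findings and "B" in findings:
        return 0.15  # "If A and B, then unlikely to be D"
    if "A" in findings:
        return 0.8   # "If A, then probably D"
    return 0.5       # no informative findings: maximal uncertainty

for case in [{"A"}, {"A", "B"}, {"C"}, set()]:
    print(sorted(case), "->", probability_of_d(case))
```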
Before I get to the take-home message, I need to set this up a bit. The way that machine intelligence currently works is that you train an algorithm to recognise patterns in large data sets, often with the help of people who annotate the data in advance. This is known as supervised learning. Sometimes the algorithm is given data sets that have no annotation (i.e. no supervision) and must find the structure in the data by itself; this is known as unsupervised learning. And sometimes the algorithm learns by acting and having its output judged against some criterion, with that feedback (a reward signal) telling it how well it did. This is known as reinforcement learning.
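A minimal sketch of the supervised case may help. The "data set" below is four invented, annotated examples (a temperature plus a label), and "training" is reduced to choosing a single decision threshold; real systems fit far more complex models, but the shape is the same: labelled examples in, decision rule out.

```python
# Tiny supervised learning illustration (data and labels are invented).
# Each example is a (temperature, label) pair, i.e. annotated data.
data = [(36.8, "healthy"), (37.0, "healthy"), (38.5, "fever"), (39.1, "fever")]

# "Training": choose a threshold midway between the highest healthy
# temperature and the lowest fever temperature seen in the data.
healthy_max = max(t for t, label in data if label == "healthy")
fever_min = min(t for t, label in data if label == "fever")
threshold = (healthy_max + fever_min) / 2

def predict(temp):
    """Apply the learned decision rule to a new, unlabelled example."""
    return "fever" if temp >= threshold else "healthy"
```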
In both cases, the algorithm isn't trained in the wild but is rather developed within a constrained environment that simulates something of interest in the real world. For example, an algorithm may be trained to deal with uncertainty by playing StarCraft, which mimics the imperfect information state of real-world decision-making. This kind of probabilistic thinking defines many professional decision-making contexts where we have to make a choice but may only be 70% confident that we're making the right choice.
Eventually, you need to take the algorithm out of the simulated training environment and run it in the real world because this is the only way to find out if it will do what you want it to. In the context of self-driving cars, this represents a high-stakes tradeoff between the benefits of early implementation (more real-world data gathering, more accurate predictions, better autonomous driving capability), and the risks of making the wrong decision (people might die).
Even in a scenario where the algorithm has been trained to very high levels in simulation and then introduced at precisely the right time so as to maximise the learning potential while also minimising risk, it will still hardly ever have been exposed to rare events. We will be in the situation where cars will have autonomy in almost all driving contexts, except those where there is a real risk of someone being hurt or killed. At that moment, because of the limitations of its training, it will hand control of the vehicle back to the driver. And there is the problem. How long will it take for drivers to lose the skills that are necessary for them to make the right choice in that rare event?
Which brings me to my point. Will we see the same loss of skills in the clinical context? Over time, algorithms will take over more and more of our clinical decision-making in much the same way that they’ll take over the responsibilities of a driver. And in almost all situations they’ll make more accurate predictions than a person. However, in some rare cases, the confidence level of the prediction will drop enough to lead to control being handed back to the clinician. Unfortunately, at this point, the clinician likely hasn’t been involved in clinical decision-making for an extended period and so, just when human judgement is determined to be most important, it may also be at its most limited.
How will clinicians maintain their clinical decision-making skills at the levels required to take over in rare events, when they are no longer involved in the day-to-day decision-making that hones that same skill?
Abstract for a project I submitted earlier this week for ethics clearance. During 2012 – 2014 we converted one of our modules that runs in the 2nd, 3rd and 4th year levels from a lecture-based format to a case-based learning format. We are now hoping to have a closer look at whether or not the CBL approach led to any changes in teaching and learning behaviours in staff and students.
Case-based learning (CBL) is a teaching method that makes use of clinical narratives to create an authentic learning activity in which students navigate their way through complex patient scenarios. The use of CBL in a health professions undergraduate curriculum attempts to convey a multidimensional representation of the context, participants and reality of a clinical situation, allowing students to explore these concepts in the classroom. While the implementation of CBL has a sound theoretical basis, as well as a strong evidence base for use in health professions education, there are challenges in its effective use that are not easily resolved. However, if it can be shown that the approach leads to changes in teaching and learning practice, which enhance student learning, providing additional resources to resolve the challenges can be more strongly justified. This project therefore aims to determine staff members’ and students’ perceptions of CBL as a teaching method, and to find out how it influenced their teaching and learning behaviours.
This study will make use of a mixed-methods research design in which the experiences and perceptions of students and staff members are used to determine whether or not there was a change in their teaching and learning practice. Qualitative and quantitative data will be gathered using a survey of all students in the population, focus group discussions with students and in-depth interviews with all staff in the department. The survey will determine if the design of the CBL approach led to a change in what the students did. The focus group discussions will gather data on the nature of the changes and the underlying rationale for those changes. The interviews with lecturers will be conducted in order to delve more deeply into whether or not lecturers’ teaching behaviours changed and, again, to explore the underlying rationale for those changes.
The survey will make use of a self-developed questionnaire that will gather quantitative data using Likert scales and other closed-ended questions. The survey will be sent to all 3rd and 4th year students in the 2015 academic year. The same students will be invited to participate in the focus groups, and the researchers will make use of purposive sampling to allocate volunteers into two focus groups in each year level. All lecturers in the department (n=10) will be invited to participate in the in-depth interviews, including those who were not directly involved in the implementation of CBL. In addition, we will also invite ex-staff members who were involved in the process, as well as postgraduate students who assisted with student facilitation.
Qualitative data will be gathered during the focus groups and interviews. This data will be interpreted via the theoretical frameworks used in the design of the CBL cases. The focus group discussions and interviews will be conducted in English and recorded using a digital audio recorder. The audio files will be sent for verbatim transcription and the anonymised, transcribed documents will then be sent to participants for verification. The transcripts will be analysed thematically, coding the data into categories of emerging themes. Trustworthiness of the analysis will be determined through member checking and peer debriefing and participants will be given the opportunity to comment on whether or not the data was interpreted according to what they meant. The transcribed verbatim draft will be given to colleagues who were not involved in the study for comment.
Yesterday I attended a presentation on clinical reasoning by Professors Vanessa Burch (University of Cape Town) and Juanita Bezuidenhout (University of Stellenbosch). Here are the notes I took during the presentation.
How does CR work?
How do errors occur?
Do clinician educators contribute to errors?
Can we identify students with CR difficulties?
Can we improve CR skills?
How does CR work?
Graphical representation of the clinical reasoning process by Charlin et al. (2012).
High level CR appears to be intuitive but is really pattern recognition that happens as a result of lots of experience.
Students don’t have the illness scripts (i.e. patterns to recognise clinical presentations / clinical knowledge organised for action) and so they spend more time in System 2 reasoning, rather than System 1 reasoning (see Charlin et al, 2012). Side note: for additional detail on how pattern recognition actually works, see Steven Pinker’s book, “How the Mind Works”.
Are we mindful of the complex thinking processes that make up CR, and do we expect students to be operating at the same level? Do we explicitly tell students about the CR process or expect them to “absorb it”?
We can act on illness scripts without acknowledging that they exist. This is why awareness of our behaviour (i.e. metacognition or mindfulness / reflection in action) is so important. System 2 processes act as a balance to prevent acting on patterns that are similar but not the same. This could be the basis for CR errors. See below the process from Lucchiari & Pravettoni’s cognitive balanced model that describes a conceptual scheme of diagnostic decision making.
It is also important to be aware that belief systems (i.e. cognitive biases and heuristics) exist, and that they can influence behaviour / decision making, which may lead to CR errors (Lucchiari & Pravettoni, 2012). See image below.
Novice practitioners tend to miss subtle differences in clinical presentations. Students must articulate their reasoning processes so that you can help them to link the facts (i.e. the clinical information) to the diagnosis. If the student missed the conceptual relationship between variables, they are prone to making mistakes.
Audétat et al (2012) use Fishbein’s integrative model of behaviour (and associated belief systems) to explain why managing clinical reasoning difficulties is so challenging (see below).
There is a tendency, in the clinical context, to emphasise service delivery above all else, with educational needs taking a distant second place. In other words, we increase the students’ case load with little thought given to how this may impact on their learning (or the actual management of the patient). The clinical environment is therefore almost never a good educational environment that is conducive to learning.
Clues to identify students with CR difficulties:
Often not aware that we’re in System 1, while students are in System 2 → talking past each other because we’re in different spaces.
Clues at the bedside:
Limited semantic transformation of patient interview. Student unable to do anything with the information at hand.
No logical clustering of complaints. The student can’t categorise like information in a clinically logical way.
No order of priority attributed to complaints. Students can’t decide what the most important problem is.
Key information not obtained during patient interview. Student doesn’t think to ask important questions → non-existent or faulty illness scripts (non-existent illness scripts are less dangerous than poorly configured ones because it’s easier to correct).
Physical examination excessively thorough or cursory. Student unable to make reasonable progress through the case.
Too many investigations ordered.
Inability to interpret results of investigations. Student unable to articulate a reasoning process, or they reason incorrectly, when confronted with a different set of variables e.g. X-ray, rather than a patient.
Strong beliefs in incorrect illness scripts can make novices see things that aren’t there e.g. seeing pneumonia on an X-ray that is clear. Belief systems are powerful drivers for behaviour.
CR errors are often left “unfixed” because trying to do it in the clinical context is too time consuming. These should be addressed later.
Other ways to see CR errors:
Discharge letters and case notes may be unstructured and lack clarity. Lack of illness scripts (or faulty ones) prevent students from linking concepts, which is evident in how they write narratives.
Too much / little time spent with the patient.
Emotional reaction to students: negative affect on the part of the patient (ask patients how they experienced the student’s management), or on the part of the clinician (there’s something about the student – that isn’t related to rudeness or some other inappropriate behaviour – that you find upsetting).
Can CR be taught?
Every clinician thinks differently.
There is no right or wrong way to think.
Diagnostic competence requires knowledge.
The challenge is to:
Organise accurate knowledge in a user-friendly way. This is about developing appropriate semantic networks / conceptual relationships.
Create rapid access routes to the knowledge. Create opportunities to access the semantic networks quickly.
Provide enough opportunities to use the pathways. Practice, practice, practice.
Avoid students thinking that they don’t know the diagnosis. Help them to move towards thinking or knowing the diagnosis.
The key to success is structured reflection. How do we get into their heads, and how do we show them what is in our heads?
Reflection must be structured because it doesn’t help for the student to keep thinking the wrong thing. It’s no good asking the student to “have another go” because they just gave it their best shot. Letting the student keep guessing isn’t useful, whether they guess the wrong answer or happen to stumble on the right one.
How do we get students to “think again” (i.e. System 1 and 2 thinking) in a structured and explicit way?
Prioritise 3 possible diagnoses
Column 1: What fits the diagnosis (Yes)? This identifies if they have an illness script. Begin by removing the diagnoses that definitely don’t fit, so that they don’t continue with the faulty illness script.
Column 2: What doesn’t fit the diagnosis (No)?
Column 3: What do you still need to find out (Data needed)?
This process will help students to articulate an illness script in a structured way. The steps require that you explicitly articulate your (i.e. the clinician’s) own thinking process. Students could also write a narrative explaining their reasoning process for the different columns.
Anxiety and loss of self-esteem will cause students to crash and be unable to take in anything that you say. You must first create an environment where they can articulate their thinking process. It’s not about giving them the answers or the facts; it’s about taking them through a reasoning process.
We cannot help students think on a case-by-case basis. There are too many cases. We need to help them to work this out on their own.
Audétat, M.-C., Dory, V., Nendaz, M., Vanpee, D., Pestiaux, D., Junod Perron, N., & Charlin, B. (2012). What is so difficult about managing clinical reasoning difficulties? Medical Education, 46(2), 216–227.
Lucchiari, C., & Pravettoni, G. (2012). Cognitive balanced model: a conceptual scheme of diagnostic decision making. Journal of Evaluation in Clinical Practice, 18(1), 82–88.
Clinical reasoning is hard to do, and even harder to facilitate in novice practitioners who lack the experience and patterns of thinking that enable them to establish conceptual relationships that are often non-trivial. Experienced clinicians have developed, over many years and many patients, a set of thinking patterns that influence the clinical decisions they make, and which they are often unaware of. The development of tacit knowledge and its application in the clinical context is largely done unconsciously, which is why experienced clinicians often feel like they “just know” what to do.
Developing clinical reasoning is included as part of clinical education, yet it is usually implicit. Students are expected to “do” clinical reasoning, yet we find it difficult to explain just what we mean by that. How do you model a way of thinking?
One of the starting points is to ask what we mean when we talk about clinical education. Traditionally, clinical education describes the teaching and learning experiences that happen in a clinical context, maybe a hospital, outpatient or clinic setting. However, if we redefine “clinical education” to mean activities that stimulate the patterns of thinking needed to think and behave in the real world, then “clinical education” is something that can happen anywhere, at any time.
My PhD was about exploring the possibilities for change that are made available through the integration of technology into clinical education. The main outcome of the project was the development of a set of draft design principles that emerged through a series of research projects that included students, clinicians and clinical educators. These principles can be used to design online and physical learning spaces that create opportunities for students to develop critical thinking as part of clinical reasoning. Each top-level principle is associated with a number of “facets” that further describe the application of the principles.
Here are the draft design principles (note that the supporting evidence and additional discussion are not included here):
1. Facilitate interaction through enhanced communication
Interaction can be between people and content
Communication is iterative and aims to improve understanding through structured dialogue that is conducted over time
Digital content is not inert, and can transform interactions by responding and changing over time
Content is a framework around which a process of interaction can take place – it is a means to an end, not an end in itself
When content is distributed over networks, the “learning environment” becomes all possible spaces where learning can happen
Interaction is possible in a range of contexts, and not exclusively during scheduled times
2. Require articulation
Articulation gives form and substance to abstract ideas, thereby exposing understanding
Articulation is about committing to a statement based on personal experience, that is supported by evidence
Articulation is public, making students accountable for what they believe
Articulation allows students’ thinking to be challenged or reinforced
Incomplete understanding is not a point of failure, but a normal part of moving towards understanding
3. Build relationships
Knowledge can be developed through the interaction between people, content and objects, through networks
Relationships can be built around collaborative activity where the responsibility for learning is shared
Facilitators are part of the process, and students are partners in teaching and learning
Facilitators are not gatekeepers – they are locksmiths
Create a safe space where “not knowing” is as important as “knowing”
Teaching and learning is a dynamic, symbiotic relationship between people
Building relationships takes into account both personal and professional development
Building relationships means balancing out power so that students also have a say in when and how learning happens
4. Embrace complexity
Develop learning spaces that are more, not less, complex
Change variables within the learning space, to replicate the dynamic context of the real world
Create problems that have poorly defined boundaries and which defy simple solutions
5. Encourage creativity
Students must identify gaps in their own understanding, and engage in a process of knowledge creation to fill those gaps
These products of learning are created through an iterative activity that includes interaction through discussion and feedback
Learning materials created should be shared with others throughout the process, to enable interaction around both process and product
Processes of content development should be structured according to the ability of the students
6. Stimulate reflection
Learning activities should have reflection built in
Completing the reflection should have a real consequence for the student
Reflection should be modelled for students
Reflections should be shared with others
Feedback on reflections should be provided as soon after the experience as possible
Students need to determine the value of reflection for themselves, it cannot be told to them
7. Acknowledge emotion
Create a safe, nonjudgemental space for students to share their personal experiences and thoughts, as well as their emotional responses to those experiences
Facilitators should validate students’ emotional responses
These shared experiences can inform valuable teaching moments
Facilitators are encouraged to share personal values and their own emotional responses to clinical encounters, normalising and scaffolding the process
Sensitive topics should be covered in face-to-face sessions
Facilitators’ emotional responses to teaching and learning should be acknowledged, as well their emotional responses to the clinical context
The learning environment should be flexible enough to adapt to the changing needs of students, but structured enough to scaffold their progress
The components of the curriculum (i.e. the teaching strategies, assessment tasks and content) should be flexible and should change when necessary
Facilitators should be flexible, changing schedules and approaches to better serve students’ learning
Tasks and activities should be “cognitively real”, enabling students to immerse themselves to the extent that they think and behave as they would be expected to in the real world
Tasks and activities should use the “tools” of the profession to expose students to the culture of the profession
Technology should be transparent, adding to, and not distracting from the immersive experience
We have implemented these draft design principles as part of a blended module that made significant use of technology to fundamentally change teaching and learning practices in our physiotherapy department. We’re currently seeing very positive changes in students’ learning behaviours and clinical reasoning while on placements, although the real benefits of this approach will only really emerge in the next year or so. I will update these principles as my research continues.
Note: The thesis is still under examination, and these design principles are still very much in draft. They have not been tested in any context other than in our department and will be undergoing refinement as I continue doing postdoctoral work in this area.
I just had a brief conversation with a colleague on the nature of the teaching method we’re using in my department. Earlier this year we shifted from a methodology premised on lectures, to the use of case-based learning. I’ve been saying for a while that content is not important, but I’ve realised that I haven’t been adding the most important part, which is that content is not important, relative to thinking.
Of course content is important, but we often forget why it’s important. Content doesn’t help students to manage patients (not much anyway). The example I often use is that a student can know many facts about TB, including, for example, its pathology. But, that won’t necessarily help them to manage a patient who has decreased air entry because of the TB.
What will help the student is the ability to link data obtained from the medical folder, patient interview and physical exam with the patient’s signs and symptoms. By establishing relationships between those variables, the student develops an understanding of how to proceed with the patient management process, which includes treatment. There is very little content that the student needs in order to establish those relationships. Too often, though, the content amounts to a recipe list of commonly used assessment and treatment interventions, which the student can memorise and apply to a patient who presents in a certain way. This is NOT what we want. This approach doesn’t help students adapt and respond to changing conditions.
Knowing the pathology of TB may tell the student WHY there is decreased air entry to the basal aspect of the lungs, but not WHAT TO DO about it (unless you want students to follow recipes). Clinical reasoning is the important part, not content. This is what I’ve been missing when I tell people that content isn’t important. It’s not, but only relative to thinking.
I’m happy and proud to announce that my first app has been released into the App Store. I’ve been working on this project for a few months now, in collaboration with the excellent team at Snapplify, in order to get this release out the door. The name of the app is The Clinical Teacher, and it’s available for download in the App Store.
The Clinical Teacher is a mobile reference app (currently only for the iPad and iPhone, but soon for Android as well) aimed at clinicians, clinical supervisors and clinical educators who are interested in improving their teaching practices. The idea is to develop short summaries (5-10 pages) of concepts related to teaching and learning practice in the clinical context, integrating rich media with academic rigour. Think of the app as a library within which various articles will be published and made available for download.
Each article within the app is based on evidence and provides insight into teaching and learning strategies in the clinical context. The articles are developed from the ground up by domain experts, making use of peer-reviewed publications and open educational resources to deliver a concise summary of the topic being explored. Articles are comprehensive enough to give you a better understanding of the topic but concise enough to cover in one sitting. However, additional resources are also provided so that you can explore the topics in even more depth.
At the moment, the content is available for purchase for a minimal fee (e.g. the Peer Review of Teaching article is $0.99), although we will push out some articles for free as we move forward. We’re inviting clinical educators to consider publishing through The Clinical Teacher with the idea of developing content that is more “academic” than a blog post, but less so than a peer-reviewed publication. Apple and Snapplify both receive 30% of the cost of the article, meaning that the author receives 40% of whatever the article makes. And you get to have your content in the app store. This may change over time, depending on how much editorial and layout of articles we have to do before work can be published. If you’d like to write a short piece for The Clinical Teacher, submit your idea here.
The idea is that over time we’ll work with Snapplify to develop features in the app that move it beyond a content delivery app and integrate social features which we can use to create a community around teaching and learning practices in clinical education. But that’s for later. Right now it’s just great to see the app available after all the effort. I’d love to hear any feedback or suggestions for improvement.
A few days ago we began the second SAFRI* session of 2011, which will lead into the SAAHE conference** later in the week. Every day I take notes and will try to put them up as we go along bearing in mind that a lot of what we do is workshop-based. The notes are a combination of points given by presenters, and my own reflections that were sparked by something that someone said. My thoughts are in italics.
Achievement: changed the way I think about the world (world = clinical education)
Challenge: helping others to see the world the way I do
Never give up…or Give up often? Come up with lots of ideas, some will be good, some not so good, some terrible. Test them all (even if only mentally) and throw away the ones that don’t stand up to being tested. Analogy with digital cameras and taking loads of photos because the cost is zero and you can delete the poor ones.
Find the underlying principle that can be generalised to many contexts i.e. details aren’t necessarily important
Why did I miss the diagnosis? (Bordage, 1999) → “Less is better” i.e. foundations are good to build on
We tend to focus on student behaviour, instead of their learning e.g. “students must attend class and pay attention”…but if you’re not interesting, then why should they attend? What is it about their presence that somehow ensures that “learning happens”? If they’re not going to listen to you (and if they can pass the exam without attending), then why do we place so much emphasis on their presence?
Teach an approach to discovery, rather than a list of things
Dual processing theory (a universal model of diagnostic reasoning)
How do we reason through clinical problems?
Make observations and identify signs / variables
Query your existing database i.e. your pre-existing knowledge
Identify associations between the observed variables and your own database i.e. pattern recognition
If existing knowledge is insufficient, query an external database (e.g. patient / textbook)
If you go through the above process enough times, you “get experience” → add more patterns to your internal database
I’m sure the above process is more eloquently and comprehensively described elsewhere
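The loop above can be sketched in code. This is a deliberately crude toy (the "illness scripts" and data structures are invented): try pattern recognition against an internal knowledge base first, fall back to an external source when that fails, and add the newly learned pattern to the internal base, which is the "getting experience" step.

```python
# Toy sketch of the diagnostic reasoning loop (all scripts are invented).
internal_db = {frozenset({"cough", "fever"}): "pneumonia?"}       # pre-existing knowledge
external_db = {frozenset({"headache", "stiff neck"}): "meningitis?"}  # e.g. a textbook

def diagnose(signs):
    key = frozenset(signs)
    if key in internal_db:                   # pattern recognition against own database
        return internal_db[key]
    if key in external_db:                   # knowledge insufficient: query external source
        internal_db[key] = external_db[key]  # "get experience": store the new pattern
        return external_db[key]
    return "no pattern recognised"           # fall back to slower analytical reasoning

print(diagnose({"headache", "stiff neck"}))  # first encounter comes from the textbook
```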
“Dancing Naked in the Mind Field” – the title of a book that perfectly describes why I blog…putting my thoughts, reflections and experiences out there and, by doing so, exposing myself while sharing.
Having a diagnosis frees you from having to think. This has implications for when you’re tired / stressed / pushed for time, in that in those circumstances you can’t think and so latch onto a diagnosis. Students experience the same thing when they’re looking for answers. Having the answer means they don’t have to think because thinking is hard and places a high demand on system resources.
There’s a strong emotional response / association with diagnoses that are made intuitively i.e. without an analytical reasoning process
Talking out loud externalises a reasoning process that is often obscured and hidden from the student
“Diagnostic error and clinical reasoning” (Norman & Eva, Medical Education, 2010)
“construct referenced” as it relates to feedback?
Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5, pp. 7–75
Rushton, A. (2005). Formative assessment: a key to deep learning? Medical Teacher, 27(6), 509–513
Nofziger, A. C., Naumburg, E. H., Davis, B. J., Mooney, C. J., & Epstein, R. M. (2010). Impact of Peer Assessment on the Professional Development of Medical Students: A Qualitative Study. Academic Medicine, 85(1), 140–147
*SAFRI (Southern Africa FAIMER Regional Institute); FAIMER (Foundation for Advancement of International Medical Education and Research)
**SAAHE (South African Association of Health Educators)
On Saturday I attended a workshop at Groote Schuur hospital that had the aim of providing “…clinicians with the opportunity to improve their ability to facilitate learning in clinical practice”. Objectives included improving the understanding of theories of learning, methods of enhancing learning and assessment practices and the role of assessment in learning. I was impressed with the number of clinical educators and supervisors (about 40) who gave up their Saturdays to attend. Here are my notes:
Learning in clinical practice
How do I learn? Immersive, pulling in additional material, alternative ideas, I need to see the big picture
How do I learn best? Personal, vested interest, answering a question of relevance, application to a relevant problem, can be associated with different sensory modalities
How did I develop “expertise”? Socially, conversation, discussion, sharing, questioning, choosing to “own” something, pushed out of your comfort zone
How does learning happen? Reducing to basic principles, commitment, dedication
When last did you learn something new?
Students feel lost and disorientated when first arriving on a placement; no matter how much they prepare, they still feel unprepared
Theory is linear; it's neat and "tight", whereas practicals are messy and untidy. So theory doesn't prepare you for practice; only practice does
Students should be allowed to make mistakes, but when a patient's health and well-being are at risk, mistakes are problematic. Students want to be "right" (maybe because we stress how important it is that they get it "right"). Clinical skills labs are useful for addressing the problem of practising while being allowed to make mistakes. But clinical skills labs are expensive
“Learning” is the process of turning information into knowledge through engagement
Learning is about making meaning
Students struggle with theoretical concepts until they have the opportunity to see / feel the concept in the real world, e.g. low tone, ataxia
Learning happens by linking new ideas to older, established ideas, which is why our perceptions of the world are highly individual
What do we do to develop student, as well as professional, identity? The notion that students are "socialised" into the profession
Once students cross a “threshold”, the learning experience opens up to them
Students sometimes know the words, but not what they mean
Many students have trouble navigating between different professional contexts
Reducing power differentials helps students feel at ease and more comfortable with the idea of sharing ideas / themselves; you "humanise" the interaction
Students often don't have a framework for self-evaluation, i.e. they don't know what a 3rd-year student should be able to do relative to a qualified practitioner. Their frame of reference is limited to themselves and a few teachers whose thinking process exists inside a black box
Correct errors gently and create a space of emotional safety; learning doesn't happen in an emotional / financial / social / personal vacuum (in another workshop that I attended the other day, the presenter mentioned the "kind teacher", an idea that I've been thinking about a lot)
Predicting the future by understanding the past allows us to look back at our practice and make long term plans for patient management
Enhancing learning in clinical situations
Why is the clinical learning situation so unique? Good place to apply theory, real world scenarios, BUT also a place that can inspire levels of fear that are not present in a classroom
We can ask students to assess their fears, i.e. what are they afraid of and why, then create an environment in which they can confront those fears and see the outcomes of their fears realised, e.g. take off the cardio leads and hear the alarm go off, but also see that the patient continues breathing
Educational theories and frameworks can give students a structure for thinking, can help guide their thought processes, but do they necessarily need a deep understanding of the theory e.g. social constructivism?
Creating relationships between pathology and “normal” helps students understand dysfunction. However, this does little to help them develop a management protocol i.e. relate dysfunction to intervention
Facilitating ethical reasoning in student clinical practice. The relationship between ethical principles should be analysed in the light of their impact on the patient
In the early stages of their training, students don’t yet have the language to articulate ethical dilemmas
Feedback to students around ethical dilemmas should acknowledge the experience, but not pass judgement on any of the parties involved
Students often don’t emphasise the moral and ethical components of their practice, as they believe that technical ability is what they will be assessed on (which is true)
Assessment isn’t perfect
Use rubrics to prepare students in terms of providing a framework for their learning
Students won’t expose their weaknesses if they believe that they will be judged on them
Students must be able to act on the feedback given, which means that it must be timeous in order to be relevant