Emotions and assessment: considerations for rater‐based judgements of entrustment

We identify and discuss three different interpretations of the influence of raters’ emotions during assessments: (i) emotions lead to biased decision making; (ii) emotions contribute random noise to assessment, and (iii) emotions constitute legitimate sources of information that contribute to assessment decisions. We discuss these three interpretations in terms of areas for future research and implications for assessment.

Source: Gomez‐Garibello, C. and Young, M. (2018), Emotions and assessment: considerations for rater‐based judgements of entrustment. Med Educ, 52: 254-262. doi:10.1111/medu.13476

When are we going to stop thinking that assessment – of any kind – is objective? As soon as you’re making a decision (about what question to ask, the mode of response, the weighting of the item, etc.) you’re making a subjective choice about the signal you’re sending to students about what you value. If the student considers you to be a proxy of the profession/institution, then you’re subconsciously signalling the values of the profession/institution.

If you’re interested in the topic of subjectivity in assessment, you may be interested in two of our In Beta episodes:

We Need Transparency in Algorithms, But Too Much Can Backfire

The students had also been asked what grade they thought they would get, and it turned out that levels of trust in those students whose actual grades hit or exceeded that estimate were unaffected by transparency. But people whose expectations were violated – students who received lower scores than they expected – trusted the algorithm more when they got more of an explanation of how it worked. This was interesting for two reasons: it confirmed a human tendency to apply greater scrutiny to information when expectations are violated. And it showed that the distrust that might accompany negative or disappointing results can be alleviated if people believe that the underlying process is fair.

Source: We Need Transparency in Algorithms, But Too Much Can Backfire

This article uses the example of algorithmic grading of student work to discuss issues of trust and transparency. One of the findings I thought was a useful takeaway in this context is that full transparency may not be the goal; rather, we should aim for medium transparency, and only in situations where students’ expectations are not met. For example, a student whose grade was lower than expected might need to be told something about how it was calculated, but when students got too much information it eroded trust in the algorithm completely. When students got the grade they expected, no transparency was needed at all, i.e. they didn’t care how the grade was calculated.

For developers of algorithms, the article also provides a short summary of what explainable AI might look like. For example, without exposing the underlying source code, which in many cases is proprietary and holds commercial value for the company, explainable AI might simply identify the relationships between inputs and outcomes, highlight possible biases, and provide guidance that may help to address potential problems in the algorithm.
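To make this concrete, here is a minimal, hypothetical sketch of what that kind of “medium transparency” could look like in code. It is my own illustration, not something from the article: the grading function stands in for a proprietary model whose internals stay hidden, and the explanation step only reports how much each input contributed to the final mark. The feature names and weights are invented.

```python
# A hypothetical sketch of "medium transparency": the grading model's internals
# stay hidden, but an explanation step reports how much each input contributed
# to the final mark. Feature names and weights are invented for illustration.

def predict_grade(features):
    """Stand-in for a proprietary grading model: returns a mark out of 100."""
    weights = {"rubric_criteria_met": 8.0, "argument_quality": 5.0, "citation_score": 3.0}
    return sum(weights[name] * value for name, value in features.items())

def explain_grade(features):
    """Report each input's contribution without exposing the model itself."""
    contributions = {}
    for name in features:
        without = dict(features, **{name: 0.0})  # zero out one input at a time
        contributions[name] = predict_grade(features) - predict_grade(without)
    return contributions

student = {"rubric_criteria_met": 6.0, "argument_quality": 4.0, "citation_score": 2.0}
print(predict_grade(student))   # the mark every student receives
print(explain_grade(student))   # shown only when the mark is lower than expected
```

In this toy version the contributions are just the weighted inputs, but the general idea, reporting what drove a decision rather than publishing the decision engine itself, is the kind of selective disclosure the article describes.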

Critical digital pedagogy in the classroom: Practical implementation

Update (12-02-18): You can now download the full chapter here (A critical pedagogy for online learning in physiotherapy education) and the edited collection here.

This post is inspired by the work I’ve recently done for a book chapter, as well as several articles on Hybrid Pedagogy, but in particular Adam Heidebrink-Bruno’s Syllabus as Manifesto. I’ve been wanting to make some changes to my Professional Ethics module for a while and the past few weeks have really given me a lot to think about. Critical pedagogy is an approach to teaching and learning that not only puts the student at the centre of the classroom but then helps them to figure out what to do now that they’re there. It also pushes teachers to go beyond the default configurations of classroom spaces. Critical digital pedagogy is when we use technology to do things that are difficult or impossible in those spaces without it.

One of the first things we do in each module we teach is provide students with a course overview, or syllabus. We don’t even think about it but this document might be the first bit of insight into how we define the space we’re going to occupy with our students. How much thought do we really give to the language and structure of the document? How much of it is informed by the students’ voice? I wondered what my own syllabus would look like if I took to heart Jesse Stommel’s suggestion that we “begin by trusting students”.

I wanted to find out more about where my students come from, so I created a shared Google Doc with a very basic outline of what information needed to be included in a syllabus. I asked them to begin by anonymously sharing something about themselves that they hadn’t shared with anyone else in the class before. Something that influenced who they are and how they came to be in that class. I took what they shared, edited it and created the Preamble to our course outline, describing our group and our context. I also added my own background to the document, sharing my values and beliefs, and positioning myself and my biases up front. I wanted to let them know that, just as I was asking them to share something of themselves, I would do the same.

Next came the learning outcomes for the module. We say that we want our students to take responsibility for their learning, but we set up the entire programme without any input from them. We decide what they will learn based on the outcomes we define, as well as how it will be assessed. So for this syllabus I included the outcomes that we have to have, and then I asked the students to each define what “success” looks like in this module for them. Each student described what they wanted to achieve by the end of the year, wrote it as a learning outcome, decided on the indicators of progress they needed, and then set timelines for completion. So each of them would have the learning outcomes that the institution and professional body require, plus one. I think this goes some way toward acknowledging the unique context of each student, and it also gives them practice in evaluating their own development towards personally meaningful goals that they set themselves.

I’ve also decided that the students will decide their own marks for these personal outcomes. At the end of the year they will evaluate their progress against the performance indicators that they have defined, and give themselves a grade that will count for 10% of their Continuous Assessment mark. This decision was inspired by this post on contract grading from HASTAC. What I’m doing isn’t exactly the same thing but it’s a similar concept in that students not only define what is important to them, but decide on the grade they earn. I’m not 100% sure how this will work in practice, but I’m leaning towards a shared document where students will do peer review on each other’s outcomes and progress. I’m interested to see what a student-led, student-graded, student-taught learning outcome looks like.

Something that is usually pretty concrete in any course is the content. But many concepts can actually be taught in a wide variety of ways and we just choose the ones that we’re most familiar with. For example, the concept of justice (fairness) could be discussed using a history of the profession, resource allocation for patients, Apartheid in South Africa, public and private health systems, and so on. In the same shared document I asked students to suggest topics they’d like to cover in the module. I asked them to suggest the things that interest them, and I’d figure out how to teach concepts from professional ethics in those contexts. This is what they added: Income inequality. Segregation. #FeesMustFall. Can ethics be taught? The death penalty. Institutional racism. Losing a patient. That’s a pretty good range of topics that will enable me to cover quite a bit of the work in the module. It’s also more likely that students will engage, considering that these are the things they’ve identified as being interesting.

Another area over which we as teachers have taken complete control is assessment. We decide what will be assessed, when the assessment happens, how it is graded, what formats we’ll accept…we even go so far as to tell students where to put the full stops and commas in their referencing lists. That’s a pretty deep level of control we’re exerting. I’ve been using a portfolio for assessment in this module for a few years, so I’m at a point where I’m comfortable with students submitting a variety of different pieces. What I’m doing differently this year is asking the students to submit each task when it’s ready rather than by some arbitrary deadline. They get to choose when it suits them to do the work, but I have asked them to be reasonable about this, mainly because if I’m going to give them decent feedback I need time before their next piece arrives. If everything is submitted at once, there’s no time to use the feedback to improve the next submission.

The students then decided what our “rules of engagement” would be in the classroom. Our module guides usually have some kind of prescription about what behaviour is expected, so I asked the students what they thought appropriate behaviour looks like and then to commit as a class to those rules. Unsurprisingly, their suggestions looked a lot like what I would have written myself. Then I asked them to decide how to address situations when individuals contravened our rules. I don’t want to be the policeman who has to discipline students…what would it look like if students decided in advance what would work in their classroom, and then took action when necessary? I’m pretty excited to find out.

I decided that there would be no notes provided for this module, and no textbook either. I prepare the lecture outline in a shared Google document, including whatever writing assignments the students need to work on and links to open access resources that are relevant for the topic. The students take notes collaboratively in the document, which I review afterwards. I add comments and structure to their notes, and point them to additional resources. Together, we will come up with something unique describing our time together. Even if the topic is static our conversations never are, so any set of notes that focuses only on the topic is going to necessarily leave out the sometimes wonderful discussion that happens in class. This way, the students get the main ideas that are covered, but we also capture the conversation, which I can supplement afterwards.

Finally, I’ve set up a module evaluation form that is open for comment immediately and committed to having it stay open for the duration of the year. The problem with module evaluations is that we ask students to complete them at the end of the year, when they’re finished and have no opportunity to benefit from their suggestions. I wouldn’t fill it in either. This way, students get to evaluate me and the module at any time, and I get feedback that I can act on immediately. I use a simple Google Form that they can access quickly and easily, with a couple of rating scales and an option to add an open-ended comment. I’m hoping that this ongoing evaluation option in a format that is convenient for students means that they will make use of it to improve our time together.

As we worked through the document I could see students really struggling with the idea that they were being asked to contribute to the structure of the module. Even as they commented on each other’s suggestions for the module, there was an uncertainty there. It took a while for them to be comfortable saying what they wanted. Not just contributing with their physical presence in the classroom, but really contributing to the design of the module: how it would be run, how they would be assessed, how they could “be” in the classroom. I’m not sure how this is going to work out, but I felt a level of enthusiasm and energy that I haven’t felt before. I felt a glimmer of something real as they started to take seriously my offer to take them seriously.

The choices above demonstrate a few very powerful additions to the other ways that we integrate technology into this module (the students’ portfolios are all on the IEP blog, they do collaborative authoring and peer review in Google Drive, course resources are shared in Drive, they do digital stories for one of the portfolio submissions, and occasionally we use Twitter for sharing interesting stories). It makes it very clear to the students that this is their classroom and their learning experience. I’m a facilitator, but they get to make real choices that have a real impact in the world. They get a sense of what it feels like to have power and authority, as well as the responsibility that comes with that.

Public posting of marks

My university has a policy where the marks for each assessment task are posted – anonymously – on the departmental notice board. I think it goes back to a time when students were not automatically notified by email and individual notifications of grades would have been too time consuming. Now that our students get their marks as soon as they are captured in the system, I asked myself why we still bother to post the marks publicly.

I can’t think of a single reason why we should. What is the benefit of posting a list of marks in which students are ranked against each other? It has no value – as far as I can tell – for learning. No value for self-esteem (unless you’re performing in the upper percentiles). No value for the institution or teacher. So why do we still do it?

I conducted a short poll among my final year ethics students asking them if they wanted me to continue posting their marks in public. See below for their responses.

[Image: screenshot of the students’ poll responses]

Moving forward, I will no longer post my students’ marks in public, nor will I publish class averages, unless specifically requested to do so. If I’m going to say that I’m assessing students against a set of criteria rather than against each other, I need my practice to mirror this. How are students supposed to develop empathy when we constantly remind them that they’re in competition with each other?

Interrogating the mistakes

We tend to focus our attention on the things that students got right. This seems perfectly appropriate at first glance because we want to celebrate what they know. Their grades are reported in such a way as to highlight the number of questions answered correctly. The cut score (pass mark) is set based on what we (often arbitrarily) decide a reasonably competent student should know (there is no basis for setting 50% as the cut score, but that’s for another post). The emphasis is always on what is known rather than what is not known.

But if you think about it, getting the right answer is a bit of a dead end as far as learning is concerned. There’s nowhere to go from there. But the wrong answer opens up a whole world of possibility. If the capacity to learn and move forward sits in the spaces taken up by faulty reasoning, shouldn’t we pay more attention to the errors that students make? The mistakes give us a starting point from which to proceed with learning.

What if we changed our emphasis in the curriculum to focus attention on the things that students don’t understand? Instead of celebrating the points they scored for getting the right answer, could we pay closer attention to the areas where they lost marks? And not in a negative way that makes students feel inferior or stupid. I’m talking about actually celebrating the wrong answers, because they give us a starting point and a direction to move. “You got that wrong. Great! Let’s talk about it. What was the first thing you thought when you read the question? Why did you say that? Did you consider this other option? What is the logical end point of the reasoning you used? Do you see now how your answer can’t be correct?” Imagine a conversation going like that. Imagine what it would mean for students’ ability to reflect on their thinking and practice.

We might end up with some powerful shared learning experiences as we get into students’ heads and try to understand what and how they think. The faulty reasoning that got them to the wrong answer is far more interesting than the correct reasoning that got them to the right answer. A focus on the mistakes that they make would actually help improve students’ ability to learn in the future, because you’d be helping to correct their faulty reasoning.

But we don’t do this. We focus on counting up the right answers and celebrating them, which means that we deflect attention from the wrong answers. We make implicit the idea that getting the right answer is good and getting the wrong answer is bad. But learning only happens when we interrogate the faulty reasoning that got us to the wrong answer.

How my students do case studies in clinical practice

Our students do small case studies as part of their clinical practice rotations. The basic idea is that they need to identify a problem with their own practice; something that they want to improve. They describe the problem in the context of a case study which gives them a framework to approach the problem like a research project. In this post I’ll talk about the process we use for designing, implementing, drafting and grading these case studies.

There are a few things that I consider to be novel in the following approach:

  1. The case studies are about improving future clinical practice, and as such are linked to students’ practices i.e. what they do and how they think
  2. Students are the case study participants i.e. they are conducting research on themselves
  3. We shift the emphasis away from a narrow definition of “The Evidence” (i.e. journal articles) and encourage students to get creative ideas from other areas of practice
  4. The grading process has features that develop students’ knowledge and skills beyond “Conducting case study research in a clinical practice module”

Design

Early on in their clinical practice rotations, the students identify an aspect of that block that they want to learn more about. We discuss the kinds of questions they want to answer, both in class and by email. Once the topic and question are agreed, they do mini “literature” reviews (3-5 sources that may include academic journals, blogs, YouTube videos, Pinterest boards…whatever) to explore the problem as described by others. They also use the literature to identify possible solutions to their problems, which then get incorporated into the Method. They must also identify what “data” they will use to determine an improvement in their performance. They can use anything from personal reflections to grades to perceived level of comfort…anything that allows them to somehow say that their practice is getting better.

Implementation and drafting of early case studies

Then they try an intervention – on themselves, because this is about improving their own practice – and gather data to analyse as part of describing a change in practice or thinking.  They must also try to develop a general principle from the case study that they can apply to other clinical contexts. I give feedback on the initial questions and comment on early drafts to guide the projects and also give them the rubric that will be used to grade their work.

Examples of case studies from last semester include:

  • Exploring the impact of meditation and breathing techniques to lower stress before and during clinical exams, using heart rate as a proxy for stress – and learning that taking a moment to breathe can help with feeling more relaxed during an exam.
  • The challenges of communicating with a patient who has expressive aphasia – and learning that the commonly suggested alternatives are often 1) very slow, 2) frustrating, and 3) not very effective.
  • Testing their own visual estimation of ROM against a smartphone app – and learning that visual estimation is (surprise) pretty poor.
  • Exploring the impact of speaking to a patient in their own language on developing rapport – and learning that spending 30 minutes every day learning a few new Xhosa words made a huge difference to how likely the patient was to agree to physio.

Submission and peer grading

Students submit hard copies to me so that I can make sure all submissions are in. Then I take the hard copies to class and randomly assign 1 case study to each student. They pair up (Reviewer 1 and 2) and we go through the case studies together, using the rubric as a guide. I think out loud about each section of the rubric, explaining what I’m looking for in each section and why it’s important for clinical practice. For example, if we’re looking at the “Language” section I explain why clarity of expression is important for describing clinical presentations, and why conciseness allows them to practice conveying complex ideas quickly (useful for ward rounds and meetings). Spelling and grammar are important, as is legibility, to ensure that your work is clearly understandable to others in the team. I go through these rationales while the students are marking and giving feedback on the case studies in front of them.

Then they swap case studies and fill out another rubric for the case study that their team member has just completed. We go through the process again, and I encourage them to look for additional places to comment on the case study. Once that’s done, they compare their rubrics for the two case studies in their team, explaining why certain marks and comments were given for certain sections. They don’t have to agree on the exact mark, but they do have to come to consensus over whether each section of the work is “Poor”, “Satisfactory” or “Good”. Then they average their marks and submit them to me again.

I take all the case studies with their 2 sets of comments (on the rubric) and feedback (on the case study itself) and I go through them all myself. This means I can focus on more abstract feedback (e.g. appropriateness of the question, analysis, ethics, etc.) because the students have already commented on much of the structural, grammatical and content-related issues.

Outcomes of the process

For me, the following outcomes of the process are important to note:

  1. Students learn how to identify an area of their own clinical practice that they want to improve. It’s not us telling them what they’re doing wrong. If we want lifelong learning to happen, our students must know how to identify areas for improvement.
  2. They take definite steps towards achieving those improvements because the case study requires them to implement an intervention. “Learning” becomes synonymous with “doing” i.e. they must take concrete steps towards addressing the problem they identified.
  3. Students develop the skills they need to find answers to questions they have about their own practice. Students learn how to regulate their own learning.
  4. Each student gets 3 sets of feedback on their case study. It’s not just me – the external “expert” – telling them how to improve, it’s their peers as well.
  5. Students get exposed to a variety of other case studies across a spectrum of quality. The peer reviewers need to know what a “good” case study looks like in order to grade one. They learn what their next case study should look like.
  6. The marking time for 54 case studies goes down from about 10 hours (I give a lot of feedback) to about 3 hours. I don’t have to give feedback on everything because almost all of the common errors are already identified and highlighted.
  7. Students learn how I think when I’m marking their work, which helps them to make different choices for the next case study. Having access to how I think about case study research in clinical practice means they are more likely to improve their next submission, because they know what I’m looking for.

In terms of the reliability of the peer marking and feedback, I noted the following when I reviewed the peer feedback and grades from earlier in the year:

  • 15 (28%) students’ marks went up when I compared my mark with the peer average, 7 (13%) students’ marks went up by 5% or more, and 4 (7%) students went from “Fail” to “Pass”.
  • 7 (13%) students’ marks went down, 3 (6%) by 5% or more, and 0 students went from “Pass” to “Fail”.
  • 28 (52%) students’ marks stayed the same.

The points I take from the above are that it’s really important for me to review the marks, and that I have a tendency to be more lenient with marking; more students had mark increases, and only 3 students’ marks went down by what I would consider a significant amount. And finally, more than half the students didn’t get a mark change at all, which is pretty good when you think about it.
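For what it’s worth, the comparison itself is simple enough to script. Here is a rough sketch with made-up marks (not the real class data) of how the peer average and my moderating mark could be reconciled and the up/down/unchanged counts tallied; it assumes a change of 5% or more counts as significant and 50% is the pass mark, matching the figures above.

```python
# Hypothetical data: peer-averaged mark vs my (moderating) mark per case study.
peer_avg = {"student_01": 62, "student_02": 48, "student_03": 55, "student_04": 71}
my_mark  = {"student_01": 67, "student_02": 52, "student_03": 55, "student_04": 68}

PASS = 50        # assumed pass mark
SIGNIFICANT = 5  # change (in percentage points) treated as significant

up        = [s for s in peer_avg if my_mark[s] > peer_avg[s]]
down      = [s for s in peer_avg if my_mark[s] < peer_avg[s]]
unchanged = [s for s in peer_avg if my_mark[s] == peer_avg[s]]
up_big    = [s for s in up if my_mark[s] - peer_avg[s] >= SIGNIFICANT]
down_big  = [s for s in down if peer_avg[s] - my_mark[s] >= SIGNIFICANT]
fail_to_pass = [s for s in up if peer_avg[s] < PASS <= my_mark[s]]

n = len(peer_avg)
print(f"up: {len(up)} ({len(up)/n:.0%}), of which >= {SIGNIFICANT}%: {len(up_big)}")
print(f"down: {len(down)} ({len(down)/n:.0%}), of which >= {SIGNIFICANT}%: {len(down_big)}")
print(f"unchanged: {len(unchanged)} ({len(unchanged)/n:.0%})")
print(f"fail -> pass after moderation: {len(fail_to_pass)}")
```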


How do we choose what to assess?

Assessing content (facts) for the sake of it – for the most part – is a useless activity because it tells us almost nothing about how students can use the facts to achieve meaningful objectives. On the other hand, how do you assess students’ ability to apply what they’ve learned? The first is easy (i.e. assessing content and recall), while the second is very difficult (i.e. assessing how students work with ideas). If we’re honest with ourselves, we have a tendency to assess what is easy to assess, rather than what we should assess.

You can argue that your assessment is valid i.e. that you are, in fact, assessing what you say you’re assessing. However, even if the assessment is valid, it may not be appropriate. In other words, your assessment tasks might match your learning outcomes (i.e. they are valid) but are you questioning your outcomes to make sure that they’re the right outcomes?

Are we assessing the things that matter?

Where does the path of least resistance lead?

Human beings are psychologically predisposed to do the easiest thing because thinking is hard and energy intensive. We are geared through evolution to take short cuts in our decision making and there is little that we can do to overcome this natural predisposition to take the path of least resistance (see System 1 and System 2 thinking patterns in Kahneman, 2011). The problem with learning is that the easy choice is often the least effective. In order to get students to do the hard work – overcome the resistance – we should encourage them to strive towards a higher purpose in their learning, as opposed to simply aiming for a pass. Students – and lecturers for that matter – almost always default to the path of least resistance unless they have a higher purpose that they are working towards. If we want students to achieve at high levels, then the path of least resistance must lead to failure to complete the task. Making the easy choice must lead to poorer outcomes than doing the hard work, but so often students can pass without doing the hard work. We must therefore create tasks that are very difficult to pass without doing hard cognitive work.

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Assessing teams instead of individuals

Patient outcomes are almost always influenced by how well the team works together, yet all of the disciplines conduct assessments of individual students. Yes, we might ask students who they would refer to, or who else is important in the management of the patient, but do we ever actually watch a student talk to a nurse, for example? We assess communication skills based on how they interact with the patient, but why don’t we make observations of how students communicate with other members of the team when it comes to preparing a management plan for the patient?

What would an assessment task look like if we assessed teams, rather than individuals? What if we asked an OT, physio and SALT student to sit down and discuss the management of a patient? Imagine how much insight this would give us into students’ 1) interdisciplinary knowledge, 2) teamwork, 3) communication skills, 4) complex clinical reasoning, and 5) patient-centred practice. What else could we learn in such an assessment? I propose that we would learn a lot more about power relations between the students in different disciplines. We might even get some idea of students’ levels of empathy for peers and colleagues, and not just patients.

What are the challenges to such an assessment task? There would be logistical issues around when the students would be available together, setting concurrent clinical practice exams, getting 2-3 examiners together (if the students are going to be working together, so should the examiners). What else? Maybe the examiners would realise that we have different expectations of what constitutes “good” student performance. Maybe we would realise that our curricula are not aligned i.e. that we think about communication differently? Maybe even – horror – that we’re teaching the “wrong” stuff. How would we respond to these challenges?

What would the benefits be to our curricula? How much would we learn about how we teach? We say that our students graduate with skills in communication, teamwork, conflict resolution, etc., but how do we know? With the increasing trend of institutions talking about interprofessional education, I would love to hear what they have to say about interprofessional assessment in the hospital with real patients (and no, having students from the different disciplines do a slideshow presentation on their research project doesn’t count). Or assessment of students working together with community members in rural areas, where we actually watch them sit down with real people and observe their interactions.

If you have any thoughts on how to go about doing something like this, please get in touch. I’d love to talk about some kind of collaborative research project.

Are we gatekeepers, or locksmiths?

David Nicholls at Critical Physiotherapy recently blogged about how we might think about access to physiotherapy education, and offers the metaphor of a gated community as one possibility.

The staff act as the guards at the gateway to the profession and the gate is a threshold across which students pass only when they have demonstrated the right to enter the community.

This got me thinking about the metaphors we use as academics, particularly those that guide how we think about our role as examiners. David’s post reminded me of a conversation I had with a colleague soon after entering academia. I was working as an external clinical examiner for a local university and we were evaluating a 3rd year student who had not done very well in the clinical exam. We were talking about whether the student had demonstrated enough of an understanding of the management of the patient in order to pass. My colleague said that we shouldn’t feel bad about failing the student because “we are the gatekeepers for the profession”. The metaphor of gatekeeper didn’t feel right to me at the time, and over the next few years I struggled with the idea that part of my job was to prevent students from progressing through the year levels. Don’t get me wrong, I’m not suggesting that we allow incompetent students to pass. My issue was with how we think about our roles as teachers and where the power to determine progression lies.

[Image: a gatekeeper]
I imagine that this is how many students think of their lecturers and clinical examiners: mysterious possessors of arcane, hidden knowledge.

A gatekeeper is someone who has the power to make decisions that affect someone who does not. In this metaphor, the examiner is the gatekeeper who decides whether or not to allow a student to cross the threshold. Gatekeeping is about control, and more specifically, controlling those who have less power. From the students’ perspective, the idea of examiner-as-gatekeeper moves the locus of control outward, rather than acknowledging that success is largely determined by one’s own motivation. It is the difference between taking personal responsibility for not doing well, or blaming some outside factor for poor performance (“The test was too difficult”; “The examiner was too strict”; “The patient was non-compliant”).

As long as we are the gatekeepers who control students’ progress through the degree, the locus of control exists outside of the student. They do the work and we either block them or allow them through. We have the power, not students. If they fail, it is because we failed them. It is far more powerful – and useful for learning – for students to take on the responsibility for their success or failure. To paraphrase from my PhD thesis:

If knowledge can exist in the spaces between people, objects and devices, then it exists in the relationships between them. [As lecturers, we should] encourage collaborative, rather than isolated, activity, where the responsibility for learning is shared with others in order to build trust. Facilitators must be active participants in completing the activities, while emphasising that students are partners in the process of teaching and learning, because by completing the learning activity together students are exposed to the tacit, hidden knowledge of the profession. In this way, lecturers are not authority figures who are external to the process of learning. Rather than being perceived as gatekeepers who determine progression through the degree by controlling students’ access to knowledge, lecturers can be seen as locksmiths, teaching students how to make their own keys, as and when it is necessary.

By thinking of lecturers (who are often also the examiners) as master locksmiths who teach students how to make their own keys, we move the locus of control back to the student. The gates that mark thresholds to higher levels of the profession still exist, as they should. It is right that students who are not ready for independent practice should be prevented from practising independently. However, rather than thinking of the examiner as a gatekeeper who prevents the student from crossing the threshold, we could think of the student as being unable to make the right key. The examiner is then simply an observer who recognises the student’s inability to open the gate. It is the student who is responsible for poor performance, not the examiner who is responsible for failing the student.

I therefore suggest that the gatekeeper metaphor for examiners be replaced with that of a locksmith, where students are regarded as apprentices and novice practitioners who are learning a craft. From this perspective we can more carefully appreciate the interaction that is necessary in the teaching and learning relationship, as we guide students towards learning how to make their own keys as they control their own fate.


Caveat: if we are part of a master-apprentice relationship with students, then their failure must be seen as our failure too. If my student cannot successfully create the right key to get through the gate, I must faithfully interrogate my role in that failure, and I wonder how many of us would be comfortable with that.

Thanks to David for posting Physiotherapy Education as a Gated Community and for stimulating me to think more carefully about how the metaphors we use inform our thinking and our practice.