assessment physiotherapy

Understanding vs knowing

Final exams vs. projects – nope, false dichotomy: a practical start to the blog year (by Grant Wiggins)

Students who know can:

  • Recall facts
  • Repeat what they’ve been told
  • Perform skills as practiced
  • Plug in missing information
  • Recognize or identify something that they’ve been shown before

Whereas students who understand can:

  • Justify a claim
  • Connect discrete facts on their own
  • Apply their learning in new contexts
  • Adapt to new circumstances, purposes or audiences
  • Criticize arguments made by others
  • Explain how and why something is the case

IF understanding is our aim, THEN the majority of the assessments (or the weighting of questions in one big assessment) must require students to demonstrate one or more of the abilities listed above.

In the Applied Physiotherapy module that we teach using a case-based learning approach, we’re trying to structure our feedback to students in terms that help them to construct their work in ways that explicitly address the items listed above. We use Google Drive to give feedback to students as they develop their own notes, and try to ensure that the students are expressing their understanding by creating relationships between concepts.

One of the major challenges has been to shift mindsets (both students’ and facilitators’) away from the idea that knowing facts is the same as understanding. As much as we try to emphasise that one can know many facts and still not understand, it’s still clear that this distinction does not come easily to everyone. Both students and some colleagues believe that knowing as many facts as possible is the key to being a strong practitioner, even though the evidence shows that decontextualised knowledge is not helpful in practice situations.

The list above, describing what student understanding “looks like”, helps facilitators and students who struggle with this shift in thinking to better grasp what we’re aiming for.


If you can’t explain it simply…

Sometimes I get frustrated with colleagues who seem to think that the more complicated they can make an idea sound, the more intelligent they must be. I can’t think of another reason why they would obfuscate what they’re trying to say. I spend a lot of time trying to simplify what I’m talking about, although I don’t always manage to get this right. It’s not that I think my audience can’t deal with complex ideas; I just think that I should be able to share complicated ideas simply.

education learning physiotherapy teaching

Ranking students, or developing understanding?

We have a collection of courses at my institution that have become known as “killer courses”. These are courses with a history of poor student performance in terms of throughput and retention, and for which we’re trying to provide extra support. Two of these killer courses are offered outside our department but are nonetheless requirements for our students to pass. Traditionally, many students struggle to get through these modules and we’re still not sure why. The institution is investigating the issue, and one of the suggestions has been to provide tutorials, for which additional funding has been made available. The problem is that students don’t attend the tutorials. Either they don’t see the value or they don’t believe that they need the extra assistance. Whatever the reason, our students (and students from other departments who are required to pass these courses) don’t attend.

I believe the reason students don’t attend is that the tutorials aren’t graded, and that this points to a deeper problem. Don’t get me wrong: I’m not suggesting that the tutorials should be graded. On the contrary, I think we need a system with fewer graded modules. The problem is that students assign value only to material that is marked, with us positioned as the all-knowing teachers with all the right answers. As clinical educators, we have developed (or are at least part of) a system in which assigning grades is the main way we tell students what we think is important. From the students’ perspective, if it’s not for marks, it can’t be important.

I spoke to our 3rd year class a few months ago and suggested that they work towards a deeper understanding of the material, rather than towards an increase in marks. This was greeted with confusion. At the end of the day they pass or fail based on their ability to accumulate marks (by the way, the 50% pass mark, or cut score, is largely arbitrary and irrelevant). They asked how I could say that understanding is important when we rank them by grade, rather than capability, and place all the value on marks. I tried to argue that an emphasis on understanding will lead to higher marks, but then I realised that we don’t evaluate for understanding (there are many reasons why teaching and evaluating understanding is hard). Many of the assessments of these students are about their ability to memorise content and patterns of movement, which is easier for them than really understanding the concepts. In order to push the understanding agenda, we will need to change how we teach and how we assess.

Once these students graduate they will never again be assessed on their professional competency with marks. This is one of the great disconnects between physiotherapy education and real-world physiotherapy practice. In the educational context, we rank students by grades based on their ability to reproduce the things that we say are important. In the real world, no-one tells them what is important; there is no ranking, no grades, no tests. The only indicator of value in the profession is whether or not patients’ quality of life improves as a result of the intervention.

I don’t think the solution to getting students to attend tutorials lies in making them mandatory or in grading them. I think we need to help students shift the emphasis of their studies from scoring higher marks towards actually understanding the concepts and ideas we’re working on. We really need to emphasise that higher marks don’t necessarily mean better understanding. But in order to do that, we need to shift our teaching culture away from placing such a high value on marks and towards emphasising the importance of deeper understanding, without making the mistake of thinking that one is the same as the other.

learning PhD physiotherapy research teaching

From “designing teaching” to “evaluating learning”

Later this month we’ll be implementing a blended approach to teaching and learning in one module in our physiotherapy department. This was to form the main part of my research project, looking at the use of technology enhanced teaching and learning in clinical education. The idea was that I’d look at the process of developing and implementing a blended teaching strategy that integrated an online component, and which would be based on a series of smaller research projects I’ve been working on.

I was quite happy with this until I had a conversation with a colleague, who asked how I planned on determining whether or not the new teaching strategy had actually worked. This threw me a little bit. I thought that I had it figured out…do small research projects to develop understanding of the students and the teaching / learning environment, use those results to inform the development of an intervention, implement the intervention and evaluate the process. Simple, right?

Then why haven’t I been able to shake the feeling that something was missing? I thought that I’d use a combination of outputs or “products of learning” (e.g. student reflective diaries, concept mapping assignments, semi-structured interviews, test results, focus groups, etc.) to evaluate my process and make a recommendation about whether others should consider taking a blended approach to clinical education. I’ve since begun to wonder if that method goes far enough in making a contribution to the field, and if there isn’t something more that I should be doing (my supervisor is convinced that I’ve got enough without having to change my plan at this late stage, and she may be right).

However, when I finally got around to reading Laurillard’s “Rethinking University Teaching”, I was quite taken with her suggested approach. It’s been quite an eye opener, not only in terms of articulating some of the problems that I see in clinical practice with our students, but also helping me to realize the difference between designing teaching activities (which is what I’ve been concentrating on), and evaluating learning (which I’ve ignored because this is hard to do). I also realized that, contrary to a good scientific approach, I didn’t have a working hypothesis, and was essentially just going to describe something without any idea of what would happen. Incidentally, there’s nothing wrong with descriptive research to evaluate a process, but if I can’t also describe the change in learning, isn’t that limiting the study?

I’m now wondering if, in addition to what I’d already planned, I need to conduct interviews with students using the phenomenological approach suggested by Laurillard, i.e. the Conversational Framework. I don’t yet have a great understanding of it, but I’m starting to see how merely aligning a curriculum can’t in itself support any assertions about changes in student learning. I need to be able to say whether a blended approach does or does not appear to fundamentally change how students construct meaning, and in order to do so I’m thinking of doing the following:

  • Interview 2nd year and 3rd year students at the very beginning of the module (January, 2012), before they’ve been introduced to case-based learning. My hypothesis is that they’ll display quite superficial mental constructs in terms of their clinical problem-solving ability, as neither group has had much experience with patient contact
  • Interview both groups again in 6 months and evaluate whether or not their constructs have changed. At this point, the 2nd years will have been through 6 months of a blended approach, while the 3rd years will have had one full term of clinical contact with patients. My hypothesis is that the 2nd years will be better able to reason their way through problems, even though the 3rd years will have had more time on clinical rotation

I hope that this will allow me to make a stronger statement about the impact of a blended approach to teaching and learning in clinical education, and to demonstrate that it fundamentally changes students’ constructs from superficial to deep understanding. I’m just not sure if the Conversational Framework is the most appropriate model for evaluating students’ problem-solving ability, as it was initially designed to evaluate multimedia tools.