From “designing teaching” to “evaluating learning”

Later this month we’ll be implementing a blended approach to teaching and learning in one module in our physiotherapy department. This was to form the main part of my research project, looking at the use of technology-enhanced teaching and learning in clinical education. The idea was that I’d look at the process of developing and implementing a blended teaching strategy that integrated an online component, and which would be based on a series of smaller research projects I’ve been working on.

I was quite happy with this until I had a conversation with a colleague, who asked how I planned on determining whether or not the new teaching strategy had actually worked. This threw me a little bit. I thought that I had it figured out…do small research projects to develop understanding of the students and the teaching / learning environment, use those results to inform the development of an intervention, implement the intervention and evaluate the process. Simple, right?

Then why haven’t I been able to shake the feeling that something was missing? I thought that I’d use a combination of outputs or “products of learning” (e.g. student reflective diaries, concept mapping assignments, semi-structured interviews, test results, focus groups, etc.) to evaluate my process and make a recommendation about whether others should consider taking a blended approach to clinical education. I’ve since begun to wonder if that method goes far enough in making a contribution to the field, and if there isn’t something more that I should be doing (my supervisor is convinced that I’ve got enough without having to change my plan at this late stage, and she may be right).

However, when I finally got around to reading Laurillard’s “Rethinking University Teaching”, I was quite taken with her suggested approach. It’s been quite an eye opener, not only in terms of articulating some of the problems that I see in clinical practice with our students, but also helping me to realize the difference between designing teaching activities (which is what I’ve been concentrating on), and evaluating learning (which I’ve ignored because this is hard to do). I also realized that, contrary to a good scientific approach, I didn’t have a working hypothesis, and was essentially just going to describe something without any idea of what would happen. Incidentally, there’s nothing wrong with descriptive research to evaluate a process, but if I can’t also describe the change in learning, isn’t that limiting the study?

I’m now wondering if, in addition to what I’d already planned, I need to conduct interviews with students using the phenomenological approach suggested by Laurillard i.e. the Conversational Framework. I don’t yet have a great understanding of it but I’m starting to see how merely aligning a curriculum can’t in itself make any assertions about changes in student learning. I need to be able to say that a blended approach does or does not appear to fundamentally change how students construct meaning, and in order to do so I’m thinking of doing the following:

  • Interview 2nd year and 3rd year students at the very beginning of the module (January, 2012), before they’ve been introduced to case-based learning. My hypothesis is that they’ll display quite superficial mental constructs in terms of their clinical problem-solving ability, as neither group has had much experience with patient contact.
  • Interview both groups again in 6 months and evaluate whether or not their constructs have changed. At this point, the 2nd years will have been through 6 months of a blended approach, while the 3rd years will have had one full term of clinical contact with patients. My hypothesis is that the 2nd years will be better able to reason their way through problems, even though the 3rd years will have had more time on clinical rotation.

I hope that this will allow me to make a stronger statement about the impact of a blended approach to teaching and learning in clinical education, and to be able to demonstrate that it fundamentally changes students’ constructs from superficial to deep understanding. I’m just not sure if the Conversational Framework is the most appropriate model to evaluate students’ problem-solving ability, as it was initially designed to evaluate multimedia tools.