Challenging students’ conceptual relationships in clinical education

I just wanted to share a thought while preparing our case notes for the Applied Physiotherapy module we’re developing. One of the designers made a note of the “guideline answers” for facilitators to some of the questions that we might use to trigger students’ thinking. I wrote the following as a comment and didn’t want to lose it when the document is finalised, so I’m putting it here.

“I think we should make sure that, in addition to the ‘answers’, we should identify the main concepts we want students to understand. Remember that we’re using our paper patient (i.e. the case) as a framework for students to learn about concepts. Then, they apply those concepts in the real world to patients. They reflect on those real-world interactions and identify dissonance between their experienced reality (the patient contact) and their abstract conceptions of reality (how they originally conceived of the patient contact). After the patient contact, they feed back to their small groups and facilitators, who together help students create new relationships between concepts. So, in short, the clinical concepts are learned initially through the paper patient, tested in the real world with an actual patient, discussed online (maybe) and then brought back to the classroom for further reflection and refinement. The next week they are exposed to new concepts that build on their previous experiences, and then they get to test those abstractions in the real world again.”

I’m trying to take an intentional approach to using Laurillard’s conception of academic learning, which I’m exploring in “Rethinking University Teaching”.

Teaching and learning workshop at Mont Fleur

Photo taken while on a short walk during the retreat.

A few weeks ago I spent 3 days at Mont Fleur near Stellenbosch, on a teaching and learning retreat. Next year we’re going to be restructuring 2 of our modules as part of a curriculum review, and I’ll be studying the process as part of my PhD. That part of the project will also form a case study for an NRF-funded, inter-institutional study on the use of emerging technologies in South African higher education.

I used the workshop as an opportunity to develop some of the ideas for how the module will change (more on that in another post), and these are the notes I took during the workshop. Most of what I was writing was specific to the module I was working with, so these notes are the more generic ones that might be useful for others.

————————

Content determines what we teach, but not how we teach it. But shouldn’t it be the outcomes that determine the content?

“Planning” for learning

Teaching is intended to make learning possible / there is an intended relationship between teaching and learning

Learning = a recombination of old and new material in order to create personal meaning. Students bring their own experience from the world that we can use to create a scaffold upon which to add new knowledge

We teach what we usually believe is important for them to know

What (and how) we teach is often constrained by external factors:

  • Amount of content
  • Time in which to cover the content (this is not the same as “creating personal meaning”)

We think of content as a series of discrete chunks of an unspecified whole, without much thought given to the relative importance of each topic as it relates to other topics, or about the nature of the relationships between topics

How do we make choices between what to include and exclude?

  • Focus on knowledge structuring
  • What are the key concepts that are at the heart of the module?
  • What are the relationships between the concepts?
  • This marks a shift from dis-embedded facts to inter-related concepts
  • This is how we organise knowledge in the discipline

Task: map the knowledge structure of your module

“Organising knowledge” in the classroom is problematic because knowledge isn’t organised in our brains in the same way that we organise it for students / on a piece of paper. We assign content to discrete categories to make it easier for students to understand / add it to their pre-existing scaffolds, but that’s not how it exists in minds.

Scientific method (our students do a basic physics course in which this method is emphasised, yet they don’t transfer this knowledge to patient assessment):

  1. Observe something
  2. Construct a hypothesis
  3. Test the hypothesis
  4. Is the outcome new knowledge / expected?

Task: create a teaching activity (try to do something different) that is aligned with a major concept in the module, and that also includes graduate attributes and learning outcomes. Can I do the poetry concept? What about gaming? In a game, learners are in control of the environment; mastering the task is a symbol of valued status within the group; a game is a demarcated learning activity with set tasks that the learner has to master in order to proceed; feedback is built in; and games can be time- and resource-constrained.

The activity should include the following points:

  • Align assessment with outcomes and teaching and learning activities (SOLO taxonomy – Structure of the Observed Learning Outcome)
  • Select a range of assessment tools
  • Justify the choice of these tools
  • Explain and defend marks and weightings
  • Meet the criteria for reliability and validity
  • Create appropriate rubrics

Assessment must be aligned with learning outcomes and modular content. It provides students with opportunities to show that they can do what is expected of them. Assessment currently highlights what students don’t know, rather than emphasising what they can do and looking for ways to build on those strengths to fill in the gaps.

Learning is about what the student does, not what the teacher does.

How do you create observable outcomes?

The activity / doing of the activity is important

As a teacher:

  • What type of feedback do you give?
  • When do you give it?
  • What happens to it?
  • Does it lead to improved learning?

Graduate attributes ↔ Learning outcomes ↔ Assessment criteria ↔ T&L activities ↔ Assessment tasks ↔ Assessment strategy

Assessment defines what students regard as important, how they spend their time and how they come to see themselves as individuals (Brown, 2001; in Irons, 2008: 11)

Self-assessment is potentially useful, although it should be low-stakes

Use a range of well-designed assessment tasks to address all of the Intended Learning Outcomes (ILOs) for your module. This will help to provide teachers with evidence of the students’ competence / understanding.

In general, quantitative assessment uses marks, while qualitative assessment uses rubrics.

Checklist for a rubric:

  • Do the categories reflect the major learning objectives?
  • Are there distinct levels which are assigned names and mark values?
  • Are the descriptions clear? Do they lie on a continuum that allows for student growth?
  • Is the language clear and easy for students to understand?
  • Is it easy for the teacher to use?
  • Can the rubric be used to evaluate the work? Can it be used for assessing needs? Can students easily identify growth areas needed?

Evaluation:

  • What were you evaluating and why?
  • When was the evaluation conducted?
  • What was positive / negative about the evaluation?
  • What changes did you make as a result of the feedback you received?

Evaluation is an objective process in which data is collected, collated and analysed to produce information or judgements on which decisions for practice change can be based

Course evaluation can be:

  • Teacher focused – for improvement of teaching practice
  • Learner focused – determine whether the course outcomes were achieved

Evaluation can be conducted at any time, depending on the purpose:

  • At the beginning to establish prior knowledge (diagnostic)
  • In the middle to check understanding (formative) e.g. think-pair-share, clickers, minute paper, blogs, reflective writing
  • At the end to determine the effectiveness of the course / to determine whether outcomes have been achieved (summative) e.g. questionnaires, interviews, debriefing sessions, tests

Obtaining information:

  • Feedback from students
  • Peer review of teaching
  • Self-evaluation

References

  • Knight (n.d.). A briefing on key concepts: Formative and summative, criterion and norm-referenced assessment
  • Morgan (2008). The Course Improvement Flowchart: A description of a tool and process for the evaluation of university teaching