The first In Beta “Experiments in Physiotherapy Education” unconference

The In Beta project may seem to have been quiet for the last few months, but in fact we’ve been busy organising a two-day In Beta unconference that will take place on 14-15 May 2019 at HESAV in Lausanne, Switzerland. If you’re planning on going to the WCPT conference (10-13 May) and have an interest in physiotherapy education, you may want to consider joining us for another two days of discussion and engagement, albeit in a more relaxed, less academic format.

Attendance is free, although you will need to make your own arrangements for travel and accommodation. For more information check out the unconference website and register here.

We are incredibly grateful to Haute Ecole de Santé Vaud for hosting the unconference and providing venues for us over the two days.

Workplace-based assessment

Yesterday I attended a workshop / seminar on workplace-based assessment given by John Norcini, president of FAIMER and creator of the mini-CEX. Here are the notes I took.

Methods
Summative (assessment of “acquired learning”, the type that has dominated assessment practice) and formative (feedback that helps students learn, i.e. assessment for learning)

The methods below move assessment into the workplace, and all of them require observation and feedback

Portfolios (“a collection of measures”) are workplace-based / encounter-based and must include observation of the encounter and procedures, along with a patient record audit and 360-degree assessment. The trainee is evaluated on the contents of the portfolio. The training programme maintains the portfolio, but the trainee may be expected to contribute to it.

“Tick box”-type assessment isn’t necessarily a problem; it depends on how faculty observe and assess the tasks on the list.

Other: medical knowledge test

The following assessment methods are all authentic, in the sense that they are based in the real world and assess students on what they actually do, not on what they do in an “exam situation”.

Mini-CEX
The assessor observes a trainee during a brief (5-10 min) patient encounter and evaluates the trainee on a few aspects / dimensions of the encounter. The assessor then provides feedback. Ideally the process should be repeated with different patients, different assessors, and different aspects. The whole exercise should take 10-15 minutes.

Direct observation of procedural skills (DOPS)
A 10-15 minute exercise in which faculty observe a patient encounter, with an emphasis on procedures. The assessor rates the trainee along a number of dimensions and then provides feedback.

Chart stimulated recall
The assessor reviews a patient record in which the trainee has made notes. Discussion centres on the trainee’s notes, and the assessor rates things like diagnoses, planning, Rx, etc. The assessor conducts an oral exam with the trainee, asking questions about clinical reasoning based on the notes. It takes 10-15 minutes and should be repeated over multiple encounters. Actual patient records must be used → validity / authenticity.

360 degree evaluation
The trainee nominates peers, faculty, patients, self, etc., who then evaluate the trainee. Everyone fills out the same form, which assesses clinical and generic skills. The trainee is given the self-ratings, assessor ratings, and mean ratings. Discrepancy forms a foundation for discussion around any misconceptions. Good for assessing teamwork, communication, interpersonal skills, etc.
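
As a rough illustration of the discrepancy idea (my own sketch, not part of the workshop), the summary might be computed like this, assuming a simple 1-5 numeric rating scale and hypothetical dimensions:

```python
# Minimal sketch of a 360-degree evaluation summary (illustrative only).
# Assumes every rater completes the same form, scoring each dimension 1-5.

from statistics import mean

# Hypothetical ratings: dimension -> scores from peers, faculty, patients
other_ratings = {
    "teamwork": [4, 5, 4, 3],
    "communication": [3, 4, 3, 4],
    "interpersonal skills": [5, 4, 4, 5],
}

# The trainee's self-ratings on the same form
self_ratings = {"teamwork": 5, "communication": 2, "interpersonal skills": 4}

for dimension, scores in other_ratings.items():
    others_mean = mean(scores)
    discrepancy = self_ratings[dimension] - others_mean
    # A large positive or negative discrepancy flags a point for discussion
    print(f"{dimension}: self={self_ratings[dimension]}, "
          f"others' mean={others_mean:.1f}, discrepancy={discrepancy:+.1f}")
```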

There are forms available for these tasks, but in reality, since it’s formative, you can make up a form that makes sense for your own profession. These assessments are meant to be brief, almost informal, encounters. They should happen as part of the working process, not scheduled as part of an “evaluation” process. This should also not replace a more comprehensive, in-depth evaluation. They may also be more appropriate for more advanced trainees, and undergrad students may be better served with a “tick-list”-type assessment tool, since they’re still learning what to do.

Don’t aim for objectivity, aim for consensus. Aggregating subjective judgements gives us what we call “objective”. We can’t remove subjectivity, even from the most rigorous MCQs, since it’s human beings who make the choices about what to include, etc. So objectivity is actually impossible to achieve, but consensus can be.

For these methods, you can make the trainee responsible for the process (i.e. they can’t progress / complete without doing all the tasks), so the trainee decides which records are used, when the assessment takes place, and who will assess. This creates an obvious bias. Or, faculty can drive the process, in which case it often doesn’t get done.

Why are workplace methods good for learning?
There is good evidence that trainees are not observed often during their learning, i.e. there is a lack of formative assessment during the programme. Medical students are often observed for less than 10% of their time in clinical settings. If trainees aren’t being observed, they can’t get feedback related to their actual performance.

WPBA is critical for learning and has a significant influence on achievement. One of the four major factors that influence learning is feedback, which accounts for some of the largest effect sizes in learning. Feedback alone improved achievement in roughly 70% of studies. Feedback is based on observation. Good feedback is often about providing sensitive information to individuals, which can be challenging in a group. Positive feedback given early in training can have long-lasting effects, and can be given safely in groups.

Feedback given by different professions, at different levels, is a good thing for trainees, so observation of procedures, etc. should be done by a variety of people in a variety of contexts. People should be targeted for the type of feedback they are most appropriate to give, i.e. asked to give feedback on what they know best. So it’s fine for a physio to give feedback on a doctor’s performance, but it might be about teamwork, rather than medical knowledge.

Giving feedback is different from giving comments. Feedback creates a pathway to improvement of learning, whereas comments might just make students feel better for a short period of time.

Types of training

Massed – many people together for a short period of time; it’s intense and faster, and results in higher levels of confidence among trainees and greater satisfaction

Spaced – many people, spread out over time, results in longer retention and better performance

Retrieval of information or of a performance enhances learning. Learning isn’t only about information going in; it’s also about how to retrieve that information. Testing forces retrieval. Regular repetition of a performance leads to better performance of the task.
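
As a loose illustration of spacing (my own sketch, not from the workshop), an expanding-interval schedule might look like this, assuming each successful retrieval doubles the gap before the next review (the start date and intervals are arbitrary):

```python
# Illustrative expanding-interval schedule for spaced retrieval practice.
# Assumption: each successful retrieval doubles the interval to the next review.

from datetime import date, timedelta

def review_schedule(start: date, first_interval_days: int = 1, reviews: int = 5):
    """Yield review dates with intervals that double after each retrieval."""
    interval = first_interval_days
    current = start
    for _ in range(reviews):
        current += timedelta(days=interval)
        yield current
        interval *= 2  # spacing grows as retention strengthens

# Hypothetical start date; prints reviews at 1, 2, 4, 8, 16 day gaps
for review_date in review_schedule(date(2019, 5, 14)):
    print(review_date.isoformat())
```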

Faculty don’t agree with one another when directly observing a performance, i.e. their ratings of the quality of the performance vary. So you need to have several observations.
All patients are different, so you have to have observations of several patients.
The time frame for a long-case assessment is unreasonable in the real world, so assessment should be within a time frame that is authentic.

WPBA focuses on formative assessment, requires observation and feedback, directs and creates learning, and responds to the problems of traditional clinical assessment.

Rating students on a scale of unsatisfactory, satisfactory, etc. is formative and doesn’t carry the same weight as a pass / fail, summative assessment. We also need to make sure that the dimensions of the assessment are commonly defined and understood, and that faculty expectations for the assessment are the same.

Assessment forms should be modified to suit the context in which they will be used.

Global vs. checklist assessments
The mini-CEX is a type of global assessment, i.e. it’s a judgement based on a global perception of the trainee. Our assessments are mostly global. The descriptions of behaviours / dimensions are meant to indicate to assessors what they should be thinking about during the assessment.
A checklist is a list of behaviours; when a behaviour occurs, the trainee gets a tick.
Our assessment forms were mixing the two types of form, which may be why there were problems.

Faculty development should aim to “surface disagreement”, because that is how you generate discussion.

Conducting the encounter

  • Be prepared and have goals for the session
  • Put yourself into the right position
  • Minimise external interruptions
  • Avoid intrusions

Characteristics of effective faculty development programmes (Skeff, 1997) – link to PDF

Faculty training / workshops are essential to prepare faculty to use the tools. It makes them more comfortable, as well as more stringent with students. If you’re not confident in your own ability, you tend to give students the benefit of the doubt. Workshops can be used to change role model behaviours.

Feedback

  • Addresses three aspects: Where am I going? How am I going? Where to next?
  • Four areas that feedback can focus on: task, process, self-regulation, self as a person (this last area is rarely effective and should be avoided; feedback must therefore focus on behaviour, not on the person)
  • Response to feedback is influenced by the trainee’s level of achievement, their culture, perceptions of the accuracy of the feedback, perceptions of the credibility and trustworthiness of the assessor, and perceptions of the usefulness of the feedback
  • The assessor’s technique influences the impact that the feedback has: establish an appropriate interpersonal climate, choose an appropriate location, elicit the trainee’s feelings and thoughts, focus on observed behaviours, be non-judgemental, be specific, offer the right amount of feedback (avoid overwhelming), and give suggestions for improvement
  • Provide an action plan and close the loop by getting student to submit something

Novice student: emphasise feedback on the task / product / outcome
Intermediate student: emphasise specific processes related to the task / performance
Advanced student: emphasise global process that extends beyond this specific situation e.g. self-regulation, self-assessment.

It’s necessary to “close the loop”, so give students something to do, i.e. an action plan that requires the student to go away and do something concrete that aims to improve an aspect of their performance.

Asking students what their impressions of the task were is a good way to set up self-regulation / self-assessment by the student.

Student self-report on something like confidence may be valid, but student self-report on competence is probably not, because students are not good judges of their own competence.

Summary
Provide an assessment of strengths and weaknesses, enable learner reaction, encourage self-assessment, and develop an action plan.

Quality assurance in assessment (this aspect of the workshop was conducted by Dr. Marietjie de Villiers)

Coming to a consensual definition:

  • External auditors (extrinsic) vs self-regulated (intrinsic)
  • Developing consensus as to what is being assessed, how, etc. i.e. developing outcomes
  • Including all role players / stakeholders
  • Aligning outcomes, content, teaching strategies, assessment i.e. are we using the right tools for the job?
  • “How can I do this better?”
  • Accountability (e.g. defending a grade you’ve given) and responsibility
  • There are logistical aspects to quality assurance, i.e. bureaucracy and logistics
  • A quality assurance framework may feel like a lot of work when everything is going smoothly, but it’s an essential “safety net” when something goes wrong
  • Quality assurance has no value if it’s just “busy work” – it’s only when it’s used to change practice, that it has value
  • Often supported with a legal framework

Some quality assurance practices by today’s participants:

  • Regular review of assessment practices and outcomes can identify trends that may not be visible at the “ground level”.
  • Problems identified should lead to changes in practice.
  • Train students how to prepare for clinical assessments. Doesn’t mean that we should coach them, but prepare them for what to expect.
  • Student feedback can also be valuable, especially if they have insight into the process.
  • Set boundaries, or constraints on the assessment so that people are aware that you’re assessing something specific, in a specific context.
  • Try to link every procedure / skill to a reference, so that every student will refer back to the same source of information.
  • Simulating a context is not the same as using the actual context.
  • Quality assurance is a long-term process, constantly being reviewed and adapted.
  • Logistical problems with very large student groups require some creativity in assessment, as well as the evaluation of the assessment.
  • Discuss the assessment with all participating assessors at a pre-exam meeting to ensure some level of consensus re. expectations. Also have a post-exam meeting to discuss outcomes and discrepancies.
  • Include external examiners in the assessment process. These external examiners should be practicing clinicians.

When running a workshop, getting input from external (perceived to be objective) people can give what you’re trying to do an air of credibility that may be missing, especially if you’re presenting to peers / colleagues.

2 principles:
Don’t aim for objectivity, aim for consensus
Multiple sources of input can improve the quality of the assessment

2 practical things:
Get input from internal and external sources when developing assessment tasks
Provide a standard source for procedures / skills so that all students can work from the same perspective

Article on work-based assessment from the BMJ

Seminar on Inter-professional Education (IPE)

A few days ago I attended a lunchtime seminar on the value and impact of interprofessional education in health sciences education, presented by Professor Hugh Barr. I unfortunately couldn’t stay for the duration of the discussion, but I took a few notes while I was there.

“Interprofessional education (IPE) is sophisticated”. I like this because it seems that we sometimes take the stance that IPE is about putting students from different disciplines in the same room and telling them to learn about each other. It became clear during the discussion just how complex IPE is.

What opportunities exist for curriculum development in the context of IPE? What are the conversations that are happening in the classrooms around interprofessional collaboration? How can those experiences be leveraged by students and educators?

View from Sir Lowry’s Pass on the way to supervise students on clinical placement in Grabouw.

We place groups of 3rd year students in a rural community about an hour outside of Cape Town, and part of that clinical rotation is to try to collaborate with students from other domains. The effort is overseen (in theory) by the Interdisciplinary Teaching and Learning Unit, although in practice there are many challenges. The biggest problem, at least as reported by students, is a lack of shared objectives between the groups. Even though they have time allocated during the week in which to work together on shared projects, the individual programmes from the various departments have little in the way of real overlap. This often leads to frustration and to departments dropping out of the collaborative part of the exercise.

When it comes to showcasing examples of collaborative work, which ones aren’t too expensive or challenging, which have good outcomes, and which can serve to promote the approach? In other words, what is the low-hanging fruit?

“small is beautiful”

One of the benefits of IPE is the idea that complex social and health problems in communities are beyond the capacity of any one profession to solve.

Formal publication in peer-reviewed journals isn’t the only set of outcomes to aim for. Interesting and relevant information that isn’t grounded in evidence and theory should also be shared. I liked the emphasis that Professor Barr placed on informal dissemination of information by alternative means.

On the question of how to break the dominance of medics in driving health strategy, Professor Barr suggested developing collaborative approaches while trying to integrate the medics, not alienating them, and, if that failed, moving forward without them. We have at least one situation, though, where medical students are driving the IPE process in a rural community that our students are placed in. There are plenty of examples where the medics are not only willing to participate but are actually leading the way.

“Research what you teach. Teach what you research” – Professor Renfrew Christie, Dean of Research

We need to acknowledge and understand that IPE in undergraduate education is only a first step towards real collaborative practice in health systems. It’s too much to expect that after a month or two of spending time together, our students will simply know how to develop shared objectives and interventions with other professions.

Students’ languages and their associations

Our Directorate of Teaching and Learning has organised a series of seminars over the next few months, with invited speakers from a variety of institutions across the country. They’ll be presenting on a range of topics, including academic literacy, integrating technology into teaching, working with large classes, teaching practices, and educational theory. I’ll also be presenting a session on personal learning, which will be similar to the other talks I’ve given on the topic recently.

Today we had a presentation by Doctor Brenda Leibowitz, who spoke about the relationship between language and biography / identity and their impact on teaching and learning. Here are a few short notes I took during the session.

Language studies typically look at homogeneous groups, but few look at cross-institutional and cultural communities.

Language can be intimidating for students (“the words are so complicated”), which means that texts can take longer to read and result in more guessing and reduced coherence

“Too hard to find the words, so you just make simple sentences”

Students appreciated the focus groups where someone was paying attention to their difficulties (“This gathering is like rain in the desert”)

The ability to communicate effectively depends on genre. Context has implications for language

Attitude has implications for language, as does identity

Mastery of a second language is important, but is not the sole determinant of academic success

Role of language in teaching and learning:

  • Proficiency
  • Social – and isolation
  • Utility
  • Value (exposure)
  • Ideological associations

Language has an impact on social and organisational structure

Code switching

How can we introduce students to the genre of academic discourse?

“Talking and writing students into the discipline”. How do you take your students with you to the conclusion, rather than leave them behind and create a gap that they cannot cross?

Assessment in an outcomes based curriculum

I attended a seminar / short course on campus yesterday, presented by Prof. Chrissie Boughey from Rhodes University. She spoke about the role of assessment in curriculum development and the link between teaching and assessing. Here are the notes I took.

Assessment is the most important factor in improving learning because we get back what we test. Therefore assessment is acknowledged as a driver of the quality of learning.

Currently, most assessment tasks encourage the reproduction of content, whereas we should rather be looking for the production of new knowledge (the analyse, evaluate, and create levels at the top of Bloom’s taxonomy of cognitive processes).

Practical exercise: Pick a course / module / subject you currently teach (Professional Ethics for Physiotherapists), think about how you assess it (Assignment, Test, Self-study, Guided reflection, Written exam) and finally, what you think you’re assessing (Critical thinking / Analysis around ethical dilemmas in healthcare, Application of theory to clinical practice). I went on to identify the following problems with assessment in the current module:

  • I have difficulty assigning a quantitative grade to what is generally a qualitative concept
  • There is little scope in the current assessment structure for a creative approach

This led to a discussion about formal university structures that determine things like how subjects will be assessed, as well as regimes of teaching and learning (“we do it this way because this is the way it’s always been done”). Do they remove your autonomy? It made me wonder what our university’s official assessment policy is.

Construct validity: Are we using assessment to assess something other than what we say we’re assessing? If so, what are we actually assessing?

There was also a question about whether or not we could / should assess only what’s been formally covered in class. How do you / should you assess knowledge that is self-taught? We could, for example, measure the process of learning, rather than the product. I made the point that in certain areas of what I teach, I no longer assign a grade to an individual piece of work and rather give a mark for the progress that the student has made, based on feedback and group discussion in that area.

Outcomes based assessment / criterion referenced assessment

  1. Uses the principle of ALIGNMENT (aligning learning outcomes, passing criteria, assessment)
  2. Is assessing what students should be able to do
  3. “Design down” is possible when you have standardised exit level outcomes (we do, prescribed by the HPCSA)
  4. The actual criteria are able to be observed and are not a guess at a mental process, “this is what I need to see in order to know that the student can do it”
  5. Choosing the assessment tasks answers the question “How will I provide opportunities for students to demonstrate what I need to see?” When the task, rather than the outcomes, is the starting point, it knocks everything else out of alignment
  6. You need space for students / teachers to engage with the course content and to negotiate meaning or understanding of the course requirements, “Where can they demonstrate competence?”

Criteria are negotiable and form the basis of assessment. They should be public, which makes educators accountable.
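
As a way of making the alignment principle concrete (my own sketch, using hypothetical module data rather than anything presented in the session), outcomes, observable criteria, and assessment tasks can be thought of as a simple mapping that can be checked for gaps:

```python
# Illustrative alignment map for an outcomes-based module (hypothetical data).
# Each learning outcome points to observable criteria and an assessment task.

module = {
    "analyse an ethical dilemma in clinical practice": {
        "criteria": ["identifies stakeholders", "applies an ethical framework"],
        "assessment": "guided reflection",
    },
    "apply ethical theory to a patient case": {
        "criteria": ["links theory to the case", "justifies the chosen action"],
        "assessment": "portfolio presentation",
    },
}

# Alignment check: every outcome needs observable criteria and an assessment task
for outcome, spec in module.items():
    assert spec["criteria"], f"no observable criteria for: {outcome}"
    assert spec["assessment"], f"no assessment task for: {outcome}"
    print(f"'{outcome}' -> {spec['assessment']} ({len(spec['criteria'])} criteria)")
```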

When designing outcomes, the process should be fluid and dynamic.

Had an interesting conversation about the privileged place of writing in assessment. What about other expressions of competence? Since speech is the primary form of communication (we learn to speak before we learn to write), we find it easier to convey ideas through conversation, as it includes other cues that we use to construct meaning. Writing is a more difficult form because we lack visual (and other) cues. Drafting is one way that constructing meaning through writing could be made easier. The other point I thought was interesting was that academic writing is communal (drafting, editors, and reviewers all provide a feedback mechanism that isn’t as fluid as speech, but is helpful nonetheless), yet we often don’t allow students to write communally.

Outcomes based assessment focusses on providing students with multiple opportunities to practice what they need to do, and the provision of feedback on that practice (formative). Eventually, students must demonstrate achievement (summative).

We should only assign marks when we evaluate performance against the course outcomes.

Finally, in thinking about the written exam as a form of assessment, we identified these characteristics:

  • It is isolated and individual
  • There is a time constraint
  • There is pressure to pass or fail

None of these characteristics are present in general physiotherapy practice. We can always ask a colleague / go to the literature for assistance. There is no constraint to have the patient fully rehabilitated by any set time, and there are no pass or fail criteria.

If assessment is a method we use to determine competence to perform a given task, and the way we assess isn’t related to the task physio students will one day perform, are we assessing them appropriately?

Note: the practical outcomes of this session will include the following:

  • Changing the final assessment of the Ethics module from a written exam to a portfolio presentation
  • Rewriting the learning outcomes of the module descriptors at this year’s planning meeting
  • Evaluating the criteria I use to mark my assignments to better reflect the module outcomes

Mozilla Open Education course – Overview

We had our first session of the Mozilla Open Education Course earlier this evening and it was pretty interesting.  There were a few technical issues with sound but generally it was very well done.  Thanks to everyone who made it possible.

Here’s a few notes that I took during the session.  I know the video will be available later but I took notes anyway and listed the comments from the presenter as it was happening, so there may be errors.  If I’ve made any mistakes, please let me know.

Mark Surman (from the Mozilla Foundation)
Spoke about why Mozilla is involved and what the foundation’s motivations are.

Why do the course?

Students are living and learning on the web.  Education is not working and the web is making this even clearer.

Educators need to teach like the web, using these building blocks:

  • (open) content
  • (open) tech
  • (open) pedagogy

This course is about using these building blocks…all 3 need to come together in order for open education to work.

Why do Mozilla and CC care?
To promote openness, participation and distributed decision-making as a core part of internet life.  Education is critical to this.

Also, an experiment to:

  • share skills
  • generate new ideas
  • find more allies
  • …have fun

Frank Hecker (Mozilla Foundation)
Elaborated on previous presentation

  • Teach people about Mozilla
  • Create learning opportunities around Mozilla technology and practices
  • Bring new people into the Mozilla camp
  • Create a global community of Mozilla educators
  • Mozilla curriculum at Seneca college
  • Incorporate Mozilla-related topics into coursework
  • http://education.mozilla.org – repo for course materials created
  • People learn things best when participating directly in the communities around that project
  • education@lists.mozilla.org

Question: will we be able to make our own Firefox add-on?  Yes

Ahrash Bissell (ccLearn)

Why is Creative Commons involved in learning?

Its mission is to minimise the legal, technological and social barriers to sharing and reusing educational materials.

Focusses on ways to improve opportunities for open education:

  • Teach about OER
  • Solve problems (built the “discover” tool for OER)
  • Build and diversify community (education is traditionally subdivided into camps, e.g. university, high school).  Open education transcends these boundaries. Boundaries are useful but should be permeable.
  • Explore better pedagogical models (learning is not something that happens in a delimited way; ideally it should be enjoyed and embraced all the time.  New models haven’t penetrated; everything has been done the same way for the last 50 years and is deeply entrenched)
  • Empower teachers and learners (there are certain expectations of students / teachers, “this is what it means to teach / learn”, with little power to engage as “scientists” in teaching / learning and make adjustments.  Open source development models emphasise feedback, creating a system that allows experimentation in an open, transparent, participatory way)

Embrace the overarching principle of engaged pedagogies; it’s not new, but it has become inevitable.

Crucial considerations:

  • Constant, formative feedback (must want to be assessed)
  • Education for skills and capacities, not rote knowledge (the internet makes it obvious why this is the way to go; “knowledge” is already everywhere, thinking is more important).  “Skilled learners”.
  • Leverage human and material capital effectively (reaching into peer groups)
  • Consider the building blocks of a participatory learning system
  • Enjoy learning

Philip Schmidt (Peer 2 Peer University)
Provided an overview of the project / sessions

Background readings available on course wiki / 20 min. interviews

Draw up a blueprint for individual / group projects:

  • (open) technology platform
  • (open) licensing
  • (open) pedagogical approach

Idea – blueprint – prototype – project!
Good idea to feed into ongoing things, like:

  • Mozilla education portal
  • Firefox plugins
  • P2PU

Next steps:

  • Decide on groups
  • Start sketching
  • Ideas more important than detail
  • A picture
  • Enough detail to start building