The first In Beta “Experiments in Physiotherapy Education” unconference

The In Beta project may seem to have been quiet for the last few months but the fact is we’ve been busy organising a two-day In Beta unconference that will take place on 14-15 May 2019 at HESAV in Lausanne, Switzerland. If you’re planning on going to the WCPT conference (10-13 May) and have an interest in physiotherapy education, you may want to look into the option of joining us for another two days of discussion and engagement, albeit in a more relaxed, less academic format.

Attendance is free, although you will need to make your own arrangements for travel and accommodation. For more information check out the unconference website and register here.

We are incredibly grateful to Haute Ecole de Santé Vaud for hosting the unconference and providing venues for us over the two days.

Postgraduate student supervision workshop


A few weeks ago I attended a seminar on postgraduate supervision, presented by some of the more experienced research supervisors on campus. While I couldn’t stay for the full session, I did manage to see two of the presenters. Here are the notes I took.

Introduction (Prof. Ramesh Bharuthram)

All academics will be required to complete a 6 month course in teaching and learning from next year onwards.

As academics we need to have a profile that demonstrates leadership in a niche area that inspires confidence in students who are interested in postgraduate studies. We need to develop a track record in student supervision and publication that draws students to want to study and develop under our supervision.

You need to lead from the front.

The student-supervisor relationship is embedded within the structure of the institution and is beyond the personal relationship between people.

The role of the supervisor for successful postgraduate research supervision (Prof. Meshach Ogunniyi)

Students may come to the supervisor thinking that they have an idea of what they want to study, but have not yet read widely and deeply enough. They need to begin with the literature and have a deep understanding of what they are looking at. They need to be embedded in the discourse of the discipline, which can be achieved through reading the literature and discussion.

We need to think of the relationship on human terms. They should feel comfortable in your home, spending time with your children, cooking together. However, the boundaries of the relationship should ensure that familiarity does not breed contempt.

We also need to be able to counsel students who are facing crises in their lives. When students have real problems in their lives, it is unlikely that their research will make much progress. We need to understand that students are embedded within families who also need their time and input.

Students’ confidence and sense of self is often lacking, and they need to be nurtured and encouraged when they are down and struggling. You must be strong so that you can drive the process, but flexible enough to adapt when students do not respond positively.

Being able to write well is absolutely essential. The ability to present ideas simply and well is a foundation of doing good research. Academic, as well as English, literacy must be developed and guidance needs to take this into account. Make use of the Writing Centre to help students develop their language skills.

The ideal student-supervisor relationship for successful postgraduate research supervision (Prof. Tammy Shefer)

Important to note there is no “ideal” relationship. Every relationship with a student will be different. Some students need more / less structure, no “one-size-fits-all” approach. Discuss expectations with students and find ways to work together e.g. decide how to communicate, when to meet, etc.

Essential for supervisors to constantly be reflecting on their process.

The goal of supervision is not only a successful thesis and graduation, it is also about induction into scholarly communities of practice and the formation of identity and practices.

There is an assumption that undergraduate students will be able to transform themselves independently into researchers without careful guidance.

Components of the relationship:

  • Student-centred: student as active agent in their own learning (the facilitation and development of empowerment and agency in the student)
  • PG student inducted into scholarly and research identity
  • Transition student-teacher relationship into a peer relationship
  • Complicated negotiation of roles and responsibilities

There needs to be an understanding and appreciation of the student context and their individual challenges, as well as being sensitive to the power relationship where the bulk of the power rests with the supervisor.

Students need to have an understanding of the limitations of the supervisor, especially in terms of when they can expect to receive feedback.

Set clear boundaries with students and ensure that they know what your limitations are.

Maintain regular contact with them even if only an email to catch up. Ensure that the process is transparent.

Consider joint / group supervision where students can assist and guide each other. Breaks the isolation and provides a sense of community where students can share and discuss ideas and challenges. Helps them learn new skills e.g. presentation and argument. However, this requires that students are at least working in similar areas.

Try to draw students into existing collaborative research projects, as it can be a valuable learning framework for PG research.

Ensure that you understand the process so that you can help students understand how their work will move through the system, from registration to final submission. It can be difficult for the relationship if students miss deadlines that the supervisor should have been aware of.

Do not assume that the student is disembodied. They have aspirations and dreams and full lives that they’re involved in.

Be careful of editing your students’ work. You run the risk of changing their ideas as well as their words. Also, they need to actively engage with the process and find their own voice.

They also need to make their own choices about how best to present themselves and when we change their work from a position of authority, they may not challenge it.

Supervision is a journey and not a task. It’s not just about “going the distance”.

Workplace-based assessment

Yesterday I attended a workshop / seminar on workplace-based assessment given by John Norcini, president of FAIMER and creator of the mini-CEX. Here are the notes I took.

Methods
Summative (assessment of “acquired learning”, which has dominated assessment practice) and formative (feedback that helps learning; assessment for learning)

The methods below bring assessment into the workplace, and all require observation and feedback.

Portfolios (“collection of measures”) are workplace-based / encounter-based and must include observation of the encounter and procedures, with a patient record audit i.e. 360 degree assessment. Trainee evaluated on the contents of the portfolio. The training programme maintains the portfolio, but the trainee may be expected to contribute to it.

“Tick box”-type assessment isn’t necessarily a problem, it depends on how faculty observe and assess the tasks on the list.

Other: medical knowledge test

The following assessment methods are all authentic, in the sense that they are based in the real world and assess students on what they actually do, not what they do in an “exam situation”.

Mini-CEX
Assessor observes a trainee during a brief (5-10 min) patient encounter, and evaluates the trainee on a few aspects / dimensions of the encounter. The assessor then provides feedback. Ideally this should happen with different patients, different assessors and different aspects. The whole exercise, including feedback, should take 10-15 minutes.

Direct observation of procedural skills (DOPS)
A 10-15 minute exercise in which faculty observe a patient encounter, with an emphasis on procedures. The assessor rates the trainee along a number of dimensions, then provides feedback.

Chart stimulated recall
Assessor reviews a patient record in which the trainee has made notes. Discussion is centred on the trainee’s notes, rating things like diagnoses, planning, Rx, etc. The assessor conducts an oral exam with the trainee, asking questions around clinical reasoning based on the notes. Takes 10-15 minutes, and should happen over multiple encounters. Must use actual patient records → validity / authenticity.

360 degree evaluation
Trainee nominates peers, faculty, patients, self, etc. who then evaluate the trainee. Everyone fills out the same form, which assesses clinical and generic skills. Trainee is given self-ratings, assessor ratings, mean ratings. Discrepancy forms a foundation for discussion around the misconceptions. Good to assess teamwork, communication, interpersonal skills, etc.
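As an aside, the aggregation step described above (everyone completes the same form; the trainee sees their self-ratings next to the mean of the assessors’ ratings, per dimension) can be sketched in a few lines. This is only an illustrative sketch with made-up dimension names, not any official 360-degree instrument:

```python
def aggregate_360(self_ratings, assessor_ratings):
    """Per dimension, report self-rating, mean of assessors' ratings,
    and the discrepancy (self minus others) that seeds the feedback
    discussion."""
    report = {}
    for dim, own in self_ratings.items():
        others = [r[dim] for r in assessor_ratings if dim in r]
        mean = sum(others) / len(others) if others else None
        report[dim] = {
            "self": own,
            "others_mean": mean,
            "discrepancy": None if mean is None else own - mean,
        }
    return report

# Hypothetical example: two assessors rate the same two dimensions
report = aggregate_360(
    {"teamwork": 5, "communication": 3},
    [{"teamwork": 3, "communication": 4},
     {"teamwork": 4, "communication": 4}],
)
# A positive discrepancy (self > others) flags a possible blind spot;
# a negative one may point to under-confidence worth discussing.
```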

There are forms available for these tasks, but in reality, since it’s formative, you can make up a form that makes sense for your own profession. These assessments are meant to be brief, almost informal, encounters. They should happen as part of the working process, not scheduled as part of an “evaluation” process. This should also not replace a more comprehensive, in-depth evaluation. They may also be more appropriate for more advanced trainees, and undergrad students may be better served with a “tick-list”-type assessment tool, since they’re still learning what to do.

Don’t aim for objectivity, aim for consensus. Aggregating subjective judgements brings us to what we call “objective”. We can’t remove subjectivity, even in the most rigorous MCQs, as it’s human beings who make choices about what to include, etc. So objectivity is actually impossible to achieve, but consensus can be.

For these methods, you can make the trainee responsible for the process (i.e. they can’t progress / complete without doing all the tasks), so the trainee decides which records, when it takes place, who will assess. This creates an obvious bias. Or, faculty can drive the process, in which case it often doesn’t get done.

Why are workplace methods good for learning?
Good evidence that trainees are not observed often during their learning, i.e. there is a lack of formative assessment during the programme. Medical students are often observed for less than 10% of their time in clinical settings. If trainees aren’t being observed, they can’t receive feedback related to that performance.

WPBA is critical for learning and has a significant influence on achievement. One of the four major factors that influence learning is feedback, which accounts for massive effect sizes in learning. Feedback alone was effective in improving achievement in 70% of studies. Feedback is based on observation. Good feedback often involves providing sensitive information to individuals, which can be challenging in a group. Positive feedback given early in training can have long-lasting effects, and can be given safely in groups.

Feedback given by different professions, at different levels, is a good thing for trainees. So, observation of procedures, etc. should be done by a variety of people, in a variety of contexts. People should be targeted for feedback, based on the type of feedback they’re most appropriate to give i.e. to give feedback on what they do best. So, it’s fine for a physio to give feedback on a doctor’s performance, but it might be about teamwork ability, rather than medical knowledge.

Giving feedback is different from giving comments. Feedback creates a pathway to improvement of learning, whereas comments might just make students feel better for a short period of time.

Types of training

Massed – many people together for a short period of time, is intense, is faster, results in higher levels of confidence among trainees, and greater satisfaction

Spaced – many people, spread out over time, results in longer retention and better performance

Retrieval of information or a performance enhances learning. Learning isn’t only about information going in, it’s also about how to retrieve information. Testing forces retrieval. Regular repetition of a performance leads to better performance of a task.

Faculty often disagree with each other on the quality of a directly observed performance, so you need to have several observations.
All patients are different, so you have to observe encounters with several patients.
The time frame for a long-case assessment is unreasonable in the real world, so assessment should happen within an authentic time frame.

WPBA focuses on formative assessment, requires observation and feedback, directs and creates learning, and responds to the problems of traditional clinical assessment.

Rating students on a scale of unsatisfactory, satisfactory, etc. is formative and doesn’t carry the same weight as a pass / fail, summative assessment. We also need to make sure that the dimensions of the assessment are commonly defined and understood, and that faculty expectations for the assessment are the same.

Assessment forms should be modified to suit the context in which they will be used.

Global vs. checklist assessments
Mini-CEX is a type of global assessment, i.e. a judgement based on a global perception of the trainee. Our assessments are more global. The descriptions of behaviours / dimensions are meant to indicate to assessors what they should be thinking about during the assessment.
A checklist is a list of behaviours, and when a behaviour occurs, the trainee gets a tick.
Our assessment forms were mixing the two types of form, which may be why there were problems.

Faculty development should aim to “surface disagreement”, because that is how you generate discussion.

Conducting the encounter

  • Be prepared and have goals for the session
  • Put yourself into the right position
  • Minimise external interruptions
  • Avoid intrusions

Characteristics of effective faculty development programmes (Skeff, 1997) – link to PDF

Faculty training / workshops are essential to prepare faculty to use the tools. It makes them more comfortable, as well as more stringent with students. If you’re not confident in your own ability, you tend to give students the benefit of the doubt. Workshops can be used to change role model behaviours.

Feedback

  • Addresses three aspects: Where am I going? How am I going? Where to next?
  • Four areas that feedback can focus on: task, process, self-regulation, self as a person (this last is rarely effective and should be avoided; feedback must focus on behaviour, not on the person)
  • Response to feedback is influenced by the trainee’s level of achievement, their culture, perceptions of the accuracy of the feedback, perceptions of the credibility and trustworthiness of the assessor, and perceptions of the usefulness of the feedback
  • Technique of the assessor influences the impact that the feedback has: establish an appropriate interpersonal climate, choose an appropriate location, elicit the trainee’s feelings and thoughts, focus on observed behaviours, be non-judgemental, be specific, offer the right amount of feedback (avoid overwhelming), and make suggestions for improvement
  • Provide an action plan and close the loop by getting student to submit something

Novice student: emphasise feedback on the task / product / outcome
Intermediate student: emphasise specific processes related to the task / performance
Advanced student: emphasise global process that extends beyond this specific situation e.g. self-regulation, self-assessment.

Necessary to “close-the-loop” so give students something to do i.e. an action plan that requires the student to go away and do something concrete that aims to improve an aspect of their performance.

Asking students what their impressions of the task were, is a good way to set up self-regulation / self-assessment by the student.

Student self-report on something like confidence may be valid, but student self-report on competence is probably not, because students are not good judges of their own competence.

Summary
Provide an assessment of strengths and weaknesses, enable learner reaction, encourage self-assessment, develop an action plan.

Quality assurance in assessment (this aspect of the workshop conducted by Dr. Marietjie de Villiers)

Coming to a consensual definition:

  • External auditors (extrinsic) vs self-regulated (intrinsic)
  • Developing consensus as to what is being assessed, how, etc. i.e. developing outcomes
  • Including all role players / stakeholders
  • Aligning outcomes, content, teaching strategies, assessment i.e. are we using the right tools for the job?
  • “How can I do this better?”
  • Accountability (e.g. defending a grade you’ve given) and responsibility
  • There are logistical aspects to quality assurance i.e. bureaucracy and logistics
  • A quality assurance framework may feel like a lot of work when everything is going smoothly, but it’s an essential “safety net” when something goes wrong
  • Quality assurance has no value if it’s just “busy work” – it’s only when it’s used to change practice, that it has value
  • Often supported with a legal framework

Some quality assurance practices by today’s participants:

  • Regular review of assessment practices and outcomes can identify trends that may not be visible at the “ground level”.
  • Problems identified should lead to changes in practice.
  • Train students how to prepare for clinical assessments. Doesn’t mean that we should coach them, but prepare them for what to expect.
  • Student feedback can also be valuable, especially if they have insight into the process.
  • Set boundaries, or constraints on the assessment so that people are aware that you’re assessing something specific, in a specific context.
  • Try to link every procedure / skill to a reference, so that every student will refer back to the same source of information.
  • Simulating a context is not the same as using the actual context.
  • Quality assurance is a long-term process, constantly being reviewed and adapted.
  • Logistical problems with very large student groups require some creativity in assessment, as well as the evaluation of the assessment.
  • Discuss the assessment with all participating assessors at a pre-exam meeting to ensure some level of consensus re. expectations. Also hold a post-exam meeting to discuss outcomes and discrepancies.
  • Include external examiners in the assessment process. These external examiners should be practicing clinicians.

When running a workshop, getting input from external (perceived to be objective) people can give what you’re trying to do an air of credibility that may be missing, especially if you’re presenting to peers / colleagues.

2 principles:
Don’t aim for objectivity, aim for consensus
Multiple sources of input can improve the quality of the assessment

2 practical things:
Get input from internal and external sources when developing assessment tasks
Provide a standard source for procedures / skills so that all students can work from the same perspective

Article on work based assessment from BMJ

Twitter Weekly Updates for 2012-03-26

Teaching and learning workshop at Mont Fleur

Photo taken while on a short walk during the retreat.

A few weeks ago I spent 3 days at Mont Fleur near Stellenbosch, on a teaching and learning retreat. Next year we’re going to be restructuring 2 of our modules as part of a curriculum review, and I’ll be studying the process as part of my PhD. That part of the project will also form a case study for an NRF-funded, inter-institutional study on the use of emerging technologies in South African higher education.

I used the workshop as an opportunity to develop some of the ideas for how the module will change (more on that in another post), and these are the notes I took during the workshop. Most of what I was writing was specific to the module I was working with, so these notes are the more generic ones that might be useful for others.

————————

Content determines what we teach, but not how we teach. But shouldn’t it be the outcomes that determine the content?

“Planning” for learning

Teaching is intended to make learning possible / there is an intended relationship between teaching and learning

Learning = a recombination of old and new material in order to create personal meaning. Students bring their own experience from the world that we can use to create a scaffold upon which to add new knowledge

We teach what we usually believe is important for them to know

What (and how) we teach is often constrained by external factors:

  • Amount of content
  • Time in which to cover the content (this is not the same as “creating personal meaning”)

We think of content as a series of discrete chunks of an unspecified whole, without much thought given to the relative importance of each topic as it relates to other topics, or about the nature of the relationships between topics

How do we make choices between what to include and exclude?

  • Focus on knowledge structuring
  • What are the key concepts that are at the heart of the module?
  • What are the relationships between the concepts?
  • This marks a shift from dis-embedded facts to inter-related concepts
  • This is how we organise knowledge in the discipline

Task: map the knowledge structure of your module

“Organising knowledge” in the classroom is problematic because knowledge isn’t organised in our brains in the same way that we organise it for students / on a piece of paper. We assign content to discrete categories to make it easier for students to understand / add it to their pre-existing scaffolds, but that’s not how it exists in minds.

Scientific method (our students do a basic physics course in which this method is emphasised, yet they don’t transfer this knowledge to patient assessment):

  1. Observe something
  2. Construct an hypothesis
  3. Test the hypothesis
  4. Is the outcome new knowledge / expected?

Task: create a teaching activity (try to do something different) that is aligned with a major concept in the module, and also includes graduate attributes and learning outcomes. Can I do the poetry concept? What about gaming? Learners are in control of the environment, mastering the task is a symbol of valued status within the group, a game is a demarcated learning activity with set tasks that the learner has to master in order to proceed, feedback is built in, games can be time and resource constrained

The activity should include the following points:

  • Align assessment with outcomes and teaching and learning activities (SOLO taxonomy – Structure of Observed Learning Outcomes)
  • Select a range of assessment tools
  • Justify the choice of these tools
  • Explain and defend marks and weightings
  • Meet the criteria for reliability and validity
  • Create appropriate rubrics

Assessment must be aligned with learning outcomes and modular content. It provides students with opportunities to show that they can do what is expected of them. Assessment currently highlights what students don’t know, rather than emphasising what they can do, and looking for ways to build on that strength to fill in the gaps.

Learning is about what the student does, not what the teacher does.

How do you create observable outcomes?

The activity / doing of the activity is important

As a teacher:

  • What type of feedback do you give?
  • When do you give it?
  • What happens to it?
  • Does it lead to improved learning?

Graduate attributes ↔ Learning outcomes ↔ Assessment criteria ↔ T&L activities ↔ Assessment tasks ↔ Assessment strategy

Assessment defines what students regard as important, how they spend their time and how they come to see themselves as individuals (Brown, 2001; in Irons, 2008: 11)

Self-assessment is potentially useful, although it should be low-stakes

Use a range of well-designed assessment tasks to address all of the Intended Learning Outcomes (ILOs) for your module. This will help to provide teachers with evidence of the students’ competence / understanding

In general quantitative assessment uses marks while qualitative assessment uses rubrics

Checklist for a rubric:

  • Do the categories reflect the major learning objectives?
  • Are there distinct levels which are assigned names and mark values?
  • Are the descriptions clear? Are they on a continuum and allow for student growth?
  • Is the language clear and easy for students to understand?
  • Is it easy for the teacher to use?
  • Can the rubric be used to evaluate the work? Can it be used for assessing needs? Can students easily identify growth areas needed?

Evaluation:

  • What were you evaluating and why?
  • When was the evaluation conducted?
  • What was positive / negative about the evaluation?
  • What changes did you make as a result of the feedback you received?

Evaluation is an objective process in which data is collected, collated and analysed to produce information or judgements on which decisions for practice change can be based

Course evaluation can be:

  • Teacher focused – for improvement of teaching practice
  • Learner focused – determine whether the course outcomes were achieved

Evaluation can be conducted at any time, depending on the purpose:

  • At the beginning to establish prior knowledge (diagnostic)
  • In the middle to check understanding (formative) e.g. think-pair-share, clickers, minute paper, blogs, reflective writing
  • At the end to determine the effectiveness of the course / to determine whether outcomes have been achieved (summative) e.g. questionnaires, interviews, debriefing sessions, tests

Obtaining information:

  • Feedback from students
  • Peer review of teaching
  • Self-evaluation


CHEC short course: teaching and learning

Today was the first day of a short course on teaching and learning. It’s pretty innovative in that it is co-ordinated by, and open to, academics from several higher education institutions in the Western Cape, and is organised by the Cape Higher Education Consortium (CHEC). The course runs for the next month, during which we attend a session a week, and includes an assignment component. In this case, the assignment is to develop and evaluate a teaching activity using principles from the course.

The content of the course is aimed at new lecturers or those with experience who’d like to explore new ideas in their teaching practices. I thought it’d be interesting to engage with people from other institutions and see what I could learn from them. The sessions are really short so there isn’t much time to cover a lot of ground. However, the interaction seemed pretty good today. Most of the notes below were thoughts I had that were inspired by what was said, and not really content from the session.

What do teachers and students do to create learning spaces?

Students’ learning behaviour is a response to the education system they’re a part of

Perceived relevance influences participation (it’s not necessarily about actual relevance)

Challenging boundaries can develop critical thinking

Definitions of learning are context dependent i.e. it’s hard to pin down a definition of what it means “to learn”. Remembering a fact is different to more efficiently performing a task, but both are “learning”

Bloom’s taxonomy implies that certain “types” of learning are more developed than others, but “Evaluation” can be done at a basic level, and “Remembering” can be complex

How do you enable self-expression as a means of developing creativity / engagement?

When we mediate teaching and learning experiences with technology, are we producing a fundamentally different thinking process? If we are, then “e-learning” isn’t just about using technology…then it really is something different that should stand alone

How does “what students do” impact on how they think? How can I make better use of our learning spaces to change students’ thinking?

How do you get students to prepare for class, engage during class, and follow up (reflect) after class, in order to reach specific learning objectives?

If you give homework, do you need to make sure that students do it? If the homework task is designed to develop thinking, and then you assess the students’ ability to think, doing the homework task stops being work for the sake of work. Completing the homework then has a real positive outcome in terms of facilitating deeper understanding, which increases the probability of the student being deemed “competent”, which makes them more likely to do the homework.

Twitter Weekly Updates for 2011-07-04

  • U.N. Report Declares Internet Access a Human Right | Threat Level | Wired.com http://bit.ly/ivNke2 #
  • #saahe2011 officially over. It was a wonderful conference made possible by the participation of health educators from all over the country #
  • Papert http://bit.ly/mggi6R. Being a revolutionary means seeing far enough ahead to know that there is going to be a fundamental change #
  • Papert http://bit.ly/le70h7. The impact of paper in education has led to the exclusion of those who don’t think in certain ways #
  • @dkeats When people are “experts” in a domain they can be blinded to great ideas in other fields and so miss opportunities to drive change #
  • @dkeats Agreed. I’ve had to work really hard to convince people in my dept that I’m not the “computer guy”, I’m the “education guy” #
  • Innovation is about linking concepts from different fields to solve problems, its not about doing the same thing with more efficiency #
  • “How do you learn enough of the words to make sense of the discipline?” #saahe2011 #
  • Presentation by David Taylor on the use of adult learning theories #saahe2011 #
  • Jack Boulet speaking about the challenges and opportunities in simulation-based assessment #saahe2011 #
  • Mendeley Desktop 1.0 Development Preview Released http://ow.ly/1ueXSs #
  • Social media is inherently a system of peer evaluation and is changing the way scholars disseminate their research http://ow.ly/1ueXMA #
  • @dkeats Wonder if the problem has to do with the fact that much “ed tech” is designed by Comp Scientists, rather than Social Sci? #
  • @dkeats Also, people have the idea that LMSs have something to do with T&L, & then struggle when it can’t do what they need it to #
  • @dkeats To qualify, the problem isn’t resistance, its misunderstanding. The conversation always ends up being about technology #
  • There’s a huge difference between “learning” & “studying”, not in terms of the process but ito motivation & objectives #
  • @thesiswhisperer conf is for health educators, mostly clinicians, many of whom are amazing teachers but for whom tech is misunderstood #
  • In a workshop with David Taylor, looking at using adult learning theories #saahe2011 #
  • Blackboard is a course management system, it has little to do with learning. Use it for what its designed for #saahe2011 #
  • Trying to change perception that technology-mediated teaching & learning isn’t about technology. Not going well #saahe2011 #
  • Just gave my presentation on the use of social networks to facilitate clinical & ethical reasoning in practice contexts #saahe2011 #
  • Deborah Murdoch Eaton talks about the role of entrepreneurship to innovate in health education #saahe2011 #
  • Social accountability is relevant for all health professions (healthsocialaccountability.org) #saahe2011 #
  • Charles Boelen talks about social accountability at #saahe2011 keynote, discusses its role in meeting society’s health needs #
  • First day of #saahe2011 over. Lots of interesting discussion and some good research being done in health science education #
  • Concept mapping workshop turned out OK. Got a CD with loads of useful information…a first for any workshop I’ve attended #saahe2011 #
  • Many people still miss the point when it comes to technology-mediated teaching & learning. Your notes on an LMS is not teaching or learning #
  • At a workshop on concept mapping, lots of content being delivered to me, not much practical yet #saahe2011 #
  • Noticed a trend of decreasing satisfaction from 1-4 year, even though overall scores were +. Implications for teaching? #saahe2011 #
  • Benjamin van Nugteren: do medical students’ perceptions of their educational environment predict academic performance? #saahe2011 #
  • Selective assignment as an applied education & research tool -> gain research exp, improve knowledge & groupwork #saahe2011 #
  • Reflective journaling: “as we write conscious thoughts, useful associations & new ideas begin to emerge” #saahe2011 #
  • Change paradigm from “just-in-case” learning to “just-in-time” learning #saahe2011 #
  • Benefits of EBP are enhanced when principles are modelled by clinicians #saahe2011 #
  • EBP less effective when taught as a discrete module. Integration with clinical practice shows improvements across all components #saahe2011 #
  • Students have difficulty conducting appraisals of online sources <- an enormous challenge when much content is accessed online #saahe2011 #
  • Looking around venue at #saahe2011 10 open laptops, 2 visible iPads (lying on desk, not being used), about 350 participants…disappointing #
  • EBP isn’t a recipe (or a religion), although that is a common misconception #saahe2011 #
  • Prof. Robin Watts discusses EBP and facilitating student learning. EBP isn’t synonymous with research #saahe2011 #
  • “A lecture without a story is like an operation without an anaesthetic” Athol Kent, #saahe2011 #
  • Kent drawing heavily on Frenk et al., 2010, Health professionals for a new century, Lancet. #
  • #saahe2011 has begun. Prof. Athol Kent: the future of health science education #
  • Portfolios and Competency http://bit.ly/jfFpfU. Really interesting comments section. Poorly implemented portfolios aren’t worth much #
  • @amcunningham I think that portfolios can demonstrate competence and be assessed but it needs a change in mindset to evaluate them #
  • @amcunningham will comment on the post when I’m off the road #
  • @amcunningham Can’t b objective as I haven’t used NHS eportfolio. Also, its hard 2 structure what should be personally meaningful experience #
  • @amcunningham Portfolios must include reflection, not just documentation. Reflection = relating past experience to future performance #
  • @amcunningham Your delusion question in the link: practitioners / students not shown how to develop a portfolio with objectives #
  • @amcunningham Also spoke a lot about competency-based education and strengths / limitations compared to apprentice-based model #
  • @amcunningham Very much. Just finished a 4 day workshop that included the use of portfolios as reflective tools in developing competence #
  • Final day of #safri 2011 finished. Busy with a few evaluations now. Spent some time developing the next phase of my project. Tired… #
  • Last day of #safri today, short session this morning, then leaving for #saahe2011 conference in Potchefstroom. It’s been an intense 5 days #
  • Papert: Calling yourself some1 who uses computers in education will be as ridiculous as calling yourself some1 who uses pencils in education #
  • Daily Papert http://bit.ly/jKlVmn. 10 years ago, Papert warned against the “computers in education” specialist. How have we responded? #
  • Daily Papert http://bit.ly/m7rfYY. Defining yourself as someone who uses computers in education, is to subordinate yourself #
  • YouTube – Augmented Reality Brain http://bit.ly/kcZWXy. When this is common in health education, things are going to get crazy #
  • @rochellesa Everyone needs some downtime, especially at 10 at night when you’re out with your wife 🙂 Seems like a nice guy, very quiet #
  • @rochellesa The large policeman he’s with isn’t keen tho. Mr Nzimande has asked 2 not b disturbed. Understandable when u want to chill out #
  • I’m sitting in a hotel in Jo’burg & Minister of Higher Education Blade Nzimande walks in and sits down next to me. Any1 have any questions? #

Seminar on Inter-professional Education (IPE)

A few days ago I attended a lunchtime seminar on the value and impact of interprofessional education in health sciences education, presented by Professor Hugh Barr. Unfortunately I couldn’t stay for the duration of the discussion, but I took a few notes while I was there.

“Interprofessional education (IPE) is sophisticated”. I like this because it seems that we sometimes take the stance that IPE is about putting students from different disciplines in the same room and telling them to learn about each other. It became clear during the discussion just how complex IPE is.

What opportunities exist for curriculum development in the context of IPE? What are the conversations that are happening in the classrooms around interprofessional collaboration? How can those experiences be leveraged by students and educators?

View from Sir Lowry’s Pass on the way to supervise students on clinical placement in Grabouw.

We place groups of 3rd year students in a rural community about an hour outside of Cape Town, and part of that clinical rotation is to try and collaborate with students from other disciplines. The effort is overseen (in theory) by the Interdisciplinary Teaching and Learning Unit, although in practice there are many challenges. The biggest problem, at least as reported by students, is a lack of shared objectives between the groups. Even though they have time allocated during the week in which to work together on shared projects, the individual programmes from the various departments have little real overlap. This often leads to frustration and a high attrition rate, with departments dropping out of the collaborative part of the exercise.

In terms of showcasing examples of collaborative work, which ones aren’t too expensive or challenging, have good outcomes, and can serve to promote the approach? In other words, what is the low-hanging fruit?

“small is beautiful”

One of the rationales for IPE is that complex social and health problems in communities are beyond the capacity of any one profession to solve.

Formal publication in peer-reviewed journals isn’t the only outcome to aim for. Interesting and relevant insights that aren’t yet grounded in evidence and theory should also be shared. I liked the emphasis that Professor Barr placed on the informal dissemination of information by alternative means.

On the question of how to break the dominance of medics in driving health strategy, Professor Barr suggested developing collaborative approaches while trying to integrate the medics rather than alienating them and, if that failed, moving forward without them. We have at least one situation, though, where medical students are driving the IPE process in a rural community that our students are placed in. There are plenty of examples where the medics are not only willing to participate but are actually leading the way.

“Research what you teach. Teach what you research” – Professor Renfrew Christie, Dean of Research

We need to acknowledge and understand that IPE in undergraduate education is only a first step towards real collaborative practice in health systems. It’s too much to expect that after a month or two of spending time together, our students will simply know how to develop shared objectives and interventions with other professions.

Developing cases for Problem-Based Learning

Workshop on the development of case-based studies

Facilitators: Dr. Ethel Stanley, Dr. Margaret Waterman

Part of my PhD will be to look at alternative approaches to clinical education, including the use of cases in problem-based learning (PBL). My specific interest is in the use of emerging technology to design and teach with those cases in small groups. Unfortunately I was only able to attend the first half of the workshop, and didn’t get the opportunity to develop my own case.

Here are my notes from the workshop:

Biology is an important topic for everyone to understand, as it impacts every major health-related decision that has to be made, so we used biological case studies as working examples

Students must be able to ask good questions in order to solve their own problems, in preparation for the types of adult learning (andragogy, as opposed to pedagogy) behaviour we’d expect to see in practice. Memorising content isn’t a good strategy for learning how to solve problems like “Why is this patient walking in a way that is different from ‘normal’?”

A lecture is a good method to deliver content, but is a poor method for active learning around problem solving

Case-based learning (CBL) is a good way to explore realistically complex situations

Begin by introducing a problem with no expectation that the student can solve the problem. Use that as a springboard to explore their ability to develop good research questions

CBL requires the confidence from teachers to give up control, but giving up control is the only way to get students to actively construct their own learning experiences by asking questions, gathering information, testing hypotheses, and convincing others of their findings

Structure for working through a basic case

  • Define the boundaries / outline of the case
  • What do you already know (group knowledge, as well as information that can be obtained from the case study) and what do you still need to know (this can be used as the basis for a short lecture) in order to answer the question?
  • Choose the most important questions to explore
  • Get into small groups and discuss / share information, knowledge, assumptions
  • Go away and try to answer the questions that were generated
  • Come back and only then get the teacher’s objectives
  • Then go away again and refine the questions and information collected

Why use cases?

  • To initiate investigations
  • To use new technologies and resources to solve problems
  • To develop local and international / global perspectives
  • To emphasise the value of interdisciplinary and collaborative approaches
  • To structure student assessment through student products
  • To support diverse objectives within a shared workspace (it would be interesting to investigate the possibility of using a wiki to develop and build on cases using this approach)

Used Gapminder to demonstrate alternative ways of visually representing data while working through a case study. See Hans Rosling (founder of Gapminder) on the Joy of Statistics, and his TED presentations.

The teacher can set the context of the class, and the depth to which students should explore questions, by using an appropriate framework / case. The teacher can also decide which questions are prioritised, and which ones can be answered via different methods e.g. lecture, essay, assignment, etc.

Highlight the fact that, as the teacher, you don’t have all the answers and that you’re a co-learner in the classroom. Students should understand that the teacher isn’t a fount of all knowledge on the subject, and that it’s acceptable and appropriate for the teacher to also have to do research on the topic

Twitter Weekly Updates for 2010-08-30