
Simon Barrie presentation on Graduate Attributes

“Curriculum renewal to achieve graduate learning outcomes: The challenge of assessment”
Prof Simon Barrie, Director of T&L, University of Sydney

Last week I had the opportunity to attend a presentation on graduate attributes and curriculum renewal by Prof Simon Barrie. The major point I took away from it was that we need to be thinking about how to change teaching and assessment practices to make sure that we’re graduating the kinds of students we say we want to. Here are the notes I took.


Assessment is often a challenge when it comes to curriculum renewal. The things that are important (e.g. critical thinking) are hard to measure, which is why we often don’t even try.

Curriculum is a more powerful lever than T&L alone, although bringing in T&L is an essential aspect of curriculum development. Is curriculum renewal just “busy bureaucracy”? It may begin with noble aims, but it can degenerate into managerial traps. Curriculum renewal and graduate attributes (GA) should be seen as part of a transformative opportunity.

GA are complex “things” and need to be engaged with in complex ways

GA should be focused on checking that higher education is fulfilling its social role. UNESCO World Declaration on Higher Education: “higher education has given ample proof of its viability over the centuries and of its ability to change and induce change and progress in society”.

GA should be a starting point for a conversation about higher education. If they exist simply as a list of outcomes, then they haven’t achieved their purpose.

How is an institution’s mission embodied in the learning experiences of students and teaching experiences of teachers?

What is the “good” of university?

  • Personal benefit – work and living a rich and rewarding life
  • Public benefit – economy and prosperity, social good

The mix of intended “goods” can influence our descriptions of the sorts of graduates that universities should be producing, and how they should be taught and assessed. But the process of higher education is a “good” in itself: the act of learning can itself be a social good, e.g. when students engage in collaborative projects that benefit the community.

Universities need to teach people how to think and to question the world we live in.

If you only talk to people like you about GA, you develop a very narrow perspective on what they are. Speaking to more varied people exposes you to multiple sets of perspectives, which makes curriculum renewal much more powerful. We bring our own assumptions to the conversation. Don’t trust your assumptions. Engage with different stakeholders. Don’t have the discussion around outcomes; have it around the purpose and meaning of higher education.

A framework for thinking about GA: it is complex and not “one size fits all”. Not all GA are at the same “level”; there are different types of “understanding”, which means different types of assessment and teaching methods.

  • Precursor: approach it as a remedial function, “if only we got the right students”
  • Complementary: everybody needs “complementary” skills that are useful but not integral to domain-specific knowledge
  • Translation: applied knowledge in an intentional way, should be able to use knowledge, translating classroom knowledge into real world application, changing the way we think about the discipline
  • Enabling: need to be able to work in conditions of uncertainty, the world is unknowable, how to navigate uncertainty, develop a way of being in the world, about openness, going beyond the discipline to enable new ways of learning (difficult to pin down and difficult to teach, and assess, hard to measure)

The above ways of “understanding” are all radically different, yet many are put on the same level and taught and assessed in the same way. Policies and implementation need to acknowledge that GA are different.

Gibbons: knowledge brought into the world and made real

The way we talk about knowledge can make it more or less powerful. Having a certain stance or attitude towards knowledge will affect how you teach and assess.

What is the link, if any, between the discipline specific lists and institutional / national higher education lists?

The National GAP – Graduate Attribute Project

What are the assessment tasks in a range of disciplines that generate convincing evidence of the achievement of graduate learning outcomes? What are the assurance processes trusted by disciplines in relation to those assessment tasks and judgments? Assessing and assuring graduate learning outcomes (AAGLO project). Here are the summary findings of the project.

Assessment for learning and not assessment of learning.

Coherent development and assessment of programme-level graduate learning outcomes requires an institutional and discipline statement of outcomes. Foundation skills? Translation attributes? Enabling attributes and dispositions? Traditional or contemporary conceptions of knowledge?

Assessment not only drives learning but also drives teaching.

  • Communication skills – Privileged
  • Information literacy – Privileged
  • Research and inquiry – Privileged
  • Ethical, social and professional understandings – Ignored (present in the lists, but not assessed)
  • Personal intellectual autonomy – Ignored (present in the lists, but not assessed)

Features of effective assessment practices:

  • Assessment for learning
  • Interconnected, multi-component, connected to other assessment, staged, not isolated
  • Authentic (about the real world), relevant (personally to the student), roles of students and assessors
  • Standards-based with effective communication of criteria, assessment for GA can’t be norm-referenced, must be standards-based
  • Involve multiple decision makers – including students
  • Programme level coherence, not just an isolated assessment but exists in relation to the programme

The above only works as evidence to support learning if it is coupled with quality assurance:

  • Quality of task
  • Quality of judgment (calibration prior to assessment, and consensus afterwards)
  • Confidence

There is a need for programme-level assessment. Assessment is usually focused at the module level, but there’s no need to assess at the module level if your programme-level assessment is effective. You can then do things like have assessments that cross modules and are carried through different year levels.

How do a university’s curriculum, teaching and learning effectively measure the achievement of learning outcomes? In order to achieve certain types of outcomes, we need to give students certain types of learning experiences.

Peter Knight’s “wicked competencies”: you can’t fake wickedness – it’s got to be the real thing: messy, challenging and consequential problems.

The outcomes can’t be used to differentiate programmes, so use teaching and learning methods and experiences to differentiate.

Stop teaching content. Use content as a framework to teach other things, e.g. critical thinking, communication, social responsibility.

5 lessons:

  1. Set the right (wicked) goals collaboratively
  2. Make a signature pedagogy for complex GA part of the 5-year plan
  3. Develop policies and procedures to encourage and reward staff
  4. Identify and provide sources of data that support curriculum renewal, rather than shut down conversations about curriculum
  5. Provide resources and change strategies to support curriculum renewal conversations

Teaching GA is “not someone else’s problem”, it needs to be integrated into discipline-specific teaching.

Be aware that this conversation is very much focused on “university” or “academic” learning, and ignores the many different ways of being and thinking that exist outside the university. How is Higher Education connecting with the outside world? Is there a conversation between us and everyone else?

We try to shape students into a mould of what we imagine they should be, without really acknowledging their unique characteristics or embracing their potential contribution to the learning relationship.

We (academics) are also often removed from where we want our students to be. Think about critical thinking, inquiry-based learning, collaboration, embracing multiple perspectives. Is that how we learn? Our organisational culture drives us away from the GA we say we want our students to have.

Workplace-based assessment

Yesterday I attended a workshop / seminar on workplace-based assessment given by John Norcini, president of FAIMER and creator of the mini-CEX. Here are the notes I took.

Methods
Summative (assessment of “acquired learning”, which has dominated assessment) and formative (feedback that helps learning; assessment for learning)

The methods below bring assessment into the workplace, and require observation and feedback.

Portfolios (a “collection of measures”) are workplace-based / encounter-based and must include observation of the encounter and procedures, along with a patient record audit and 360-degree assessment. The trainee is evaluated on the contents of the portfolio. The training programme maintains the portfolio, but the trainee may be expected to contribute to it.

“Tick box”-type assessment isn’t necessarily a problem, it depends on how faculty observe and assess the tasks on the list.

Other: medical knowledge test

The following assessment methods are all authentic, in the sense that they are based in the real world and assess students on what they are actually doing, not on what they do in an “exam situation”.

Mini-CEX
The assessor observes a trainee during a brief (5-10 min) patient encounter, and evaluates the trainee on a few aspects / dimensions of the encounter. The assessor then provides feedback. Ideally this should happen with different patients, different assessors and different aspects. The whole exercise should take 10-15 minutes.

Direct observation of procedural skills (DOPS)
A 10-15 minute exercise in which faculty observe a patient encounter, with an emphasis on procedures. The assessor rates the trainee along a number of dimensions, then provides feedback.

Chart stimulated recall
The assessor reviews a patient record in which the trainee has made notes. Discussion is centred on the trainee’s notes, rating things like diagnoses, planning, Rx, etc. The assessor conducts an oral exam with the trainee, asking questions around clinical reasoning based on the notes. Takes 10-15 minutes, and should cover multiple encounters. Must use actual patient records → validity / authenticity.

360 degree evaluation
The trainee nominates peers, faculty, patients, self, etc., who then evaluate the trainee. Everyone fills out the same form, which assesses clinical and generic skills. The trainee is given the self-ratings, assessor ratings and mean ratings; the discrepancy between them forms a foundation for discussion around misconceptions. Good for assessing teamwork, communication, interpersonal skills, etc.
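
As an aside, the discrepancy step is easy to make concrete. Below is a minimal sketch (my own illustration in Python, not from the workshop; all dimension names and scores are hypothetical) of comparing self-ratings against the mean of the other raters’ scores to find the gaps worth discussing:

    # Minimal sketch of the 360-degree discrepancy calculation.
    # All dimensions and scores are hypothetical illustrations.
    from statistics import mean

    # Ratings on a 1-9 scale from the trainee's nominated raters
    assessor_ratings = {
        "teamwork":      [6, 7, 5, 6],
        "communication": [4, 5, 4, 5],
        "interpersonal": [7, 8, 7, 7],
    }
    self_ratings = {"teamwork": 6, "communication": 7, "interpersonal": 7}

    for dimension, ratings in assessor_ratings.items():
        others = mean(ratings)
        gap = self_ratings[dimension] - others
        # A large positive gap (self >> others) flags a possible blind
        # spot to explore in the feedback discussion.
        print(f"{dimension}: self={self_ratings[dimension]}, "
              f"others={others:.1f}, discrepancy={gap:+.1f}")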

There are forms available for these tasks, but in reality, since it’s formative, you can make up a form that makes sense for your own profession. These assessments are meant to be brief, almost informal, encounters. They should happen as part of the working process, not scheduled as part of an “evaluation” process. This should also not replace a more comprehensive, in-depth evaluation. They may also be more appropriate for more advanced trainees, and undergrad students may be better served with a “tick-list”-type assessment tool, since they’re still learning what to do.

Don’t aim for objectivity, aim for consensus. Aggregating subjective judgements brings us to what we’re calling “objective”. We can’t remove subjectivity, even in the most rigorous MCQs, as it’s human beings who make choices about what to include, etc. So objectivity is actually impossible to achieve, but consensus can be.
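
To illustrate that point numerically (a toy simulation of my own, not from the workshop; the rater model and “true” quality are assumptions), treating each assessor’s score as a subjective judgement with personal bias and noise shows the aggregate settling down as the panel grows:

    # Toy simulation: aggregating subjective judgements into consensus.
    # The 'true' quality and the rater model are hypothetical assumptions.
    import random

    random.seed(1)
    TRUE_QUALITY = 6.0  # performance quality on a 1-9 scale

    def subjective_rating(quality):
        bias = random.uniform(-1.0, 1.0)  # each rater's personal leaning
        noise = random.gauss(0, 0.5)      # moment-to-moment inconsistency
        return quality + bias + noise

    for n_raters in (1, 3, 10, 30):
        ratings = [subjective_rating(TRUE_QUALITY) for _ in range(n_raters)]
        consensus = sum(ratings) / n_raters
        print(f"{n_raters:>2} raters -> consensus score {consensus:.2f}")

With more raters the individual leanings average out, which is the statistical sense in which multiple assessors and multiple observations buy you trustworthy judgements.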

For these methods, you can make the trainee responsible for the process (i.e. they can’t progress / complete without doing all the tasks), so the trainee decides which records, when it takes place, who will assess. This creates an obvious bias. Or, faculty can drive the process, in which case it often doesn’t get done.

Why are workplace methods good for learning?
There is good evidence that trainees are not observed often during their learning, i.e. there is a lack of formative assessment during the programme. Medical students are often observed for less than 10% of their time in clinical settings. If trainees aren’t being observed, they aren’t getting feedback related to their performance.

WPBA is critical for learning and has a significant influence on achievement. Feedback is one of the 4 major factors that influence learning, and it accounts for massive effect sizes. Feedback alone was effective in improving achievement in 70% of studies. Feedback is based on observation. Good feedback often involves providing sensitive information to individuals, which can be challenging in a group. Positive feedback given early in training can have long-lasting effects, and can be given safely in groups.

Feedback given by different professions, at different levels, is a good thing for trainees. So, observation of procedures, etc. should be done by a variety of people, in a variety of contexts. People should be targeted for feedback, based on the type of feedback they’re most appropriate to give i.e. to give feedback on what they do best. So, it’s fine for a physio to give feedback on a doctor’s performance, but it might be about teamwork ability, rather than medical knowledge.

Giving feedback is different from giving comments. Feedback creates a pathway to improvement of learning, whereas comments might just make students feel better for a short period of time.

Types of training

Massed – many people together for a short period of time; intense and faster, resulting in higher levels of confidence among trainees and greater satisfaction

Spaced – many people, spread out over time; results in longer retention and better performance

Retrieval of information or of a performance enhances learning. Learning isn’t only about information going in; it’s also about how to retrieve information. Testing forces retrieval. Regular repetition of a performance leads to better performance of a task.

Faculty who directly observe the same performance often don’t agree on the quality of that performance, so you need to have several observations.
All patients are different, so you have to have observations of several patients.
The time frame for a long-case assessment is unreasonable in the real world, so assessment should be within a time frame that is authentic.

WPBA focuses on formative assessment, requires observation and feedback, directs and creates learning, and responds to the problems of traditional clinical assessment.

Rating students on a scale of unsatisfactory, satisfactory, etc. is formative and doesn’t carry the same weight as a pass / fail, summative assessment. We also need to make sure that the dimensions of the assessment are commonly defined and understood, and that faculty expectations for the assessment are the same.

Assessment forms should be modified to suit the context in which they are to be used.

Global vs. checklist assessments
The mini-CEX is a type of global assessment, i.e. a judgement based on a global perception of the trainee. Our assessments are closer to global assessments. The descriptions of behaviours / dimensions are meant to indicate to assessors what they should be thinking about during the assessment.
A checklist is a list of behaviours, and when a behaviour occurs, the trainee gets a tick.
Our assessment forms were mixing the two types of form, which may be why there were problems.

Faculty development should aim to “surface disagreement”, because that is how you generate discussion.

Conducting the encounter

  • Be prepared and have goals for the session
  • Put yourself into the right position
  • Minimise external interruptions
  • Avoid intrusions

Characteristics of effective faculty development programmes (Skeff, 1997) – link to PDF

Faculty training / workshops are essential to prepare faculty to use the tools. It makes them more comfortable, as well as more stringent with students. If you’re not confident in your own ability, you tend to give students the benefit of the doubt. Workshops can be used to change role model behaviours.

Feedback

  • Addresses three aspects: Where am I going? How am I going? Where to next?
  • Four areas that feedback can focus on: the task, the process, self-regulation, and the self as a person (this last is rarely effective and should be avoided, so feedback must focus on behaviour, not on the person)
  • Response to feedback is influenced by the trainee’s level of achievement, their culture, perceptions of the accuracy of the feedback, perceptions of the credibility and trustworthiness of the assessor, and perceptions of the usefulness of the feedback
  • The technique of the assessor influences the impact that the feedback has: establish an appropriate interpersonal climate, choose an appropriate location, elicit the trainee’s feelings and thoughts, focus on observed behaviours, be non-judgemental, be specific, offer the right amount of feedback (avoid overwhelming), and offer suggestions for improvement
  • Provide an action plan and close the loop by getting the student to submit something

Novice student: emphasise feedback on the task / product / outcome
Intermediate student: emphasise specific processes related to the task / performance
Advanced student: emphasise global process that extends beyond this specific situation e.g. self-regulation, self-assessment.

It is necessary to “close the loop”, so give students something to do, i.e. an action plan that requires the student to go away and do something concrete that aims to improve an aspect of their performance.

Asking students what their impressions of the task were is a good way to set up self-regulation / self-assessment by the student.

Student self-report on something like confidence may be valid, but student self-report on competence is probably not, because students are not good judges of their own competence.

Summary
Provide an assessment of strengths and weaknesses, enable learner reaction, encourage self-assessment, and develop an action plan.

Quality assurance in assessment (this aspect of the workshop was conducted by Dr. Marietjie de Villiers)

Coming to a consensual definition:

  • External auditors (extrinsic) vs self-regulated (intrinsic)
  • Developing consensus as to what is being assessed, how, etc. i.e. developing outcomes
  • Including all role players / stakeholders
  • Aligning outcomes, content, teaching strategies, assessment i.e. are we using the right tools for the job?
  • “How can I do this better?”
  • Accountability (e.g. defending a grade you’ve given) and responsibility
  • There are logistical and bureaucratic aspects to quality assurance
  • A quality assurance framework may feel like a lot of work when everything is going smoothly, but it’s an essential “safety net” when something goes wrong
  • Quality assurance has no value if it’s just “busy work” – it’s only when it’s used to change practice, that it has value
  • Often supported with a legal framework

Some quality assurance practices by today’s participants:

  • Regular review of assessment practices and outcomes can identify trends that may not be visible at the “ground level”.
  • Problems identified should lead to changes in practice.
  • Train students how to prepare for clinical assessments. This doesn’t mean that we should coach them, but we should prepare them for what to expect.
  • Student feedback can also be valuable, especially if they have insight into the process.
  • Set boundaries, or constraints on the assessment so that people are aware that you’re assessing something specific, in a specific context.
  • Try to link every procedure / skill to a reference, so that every student will refer back to the same source of information.
  • Simulating a context is not the same as using the actual context.
  • Quality assurance is a long-term process, constantly being reviewed and adapted.
  • Logistical problems with very large student groups require some creativity in assessment, as well as the evaluation of the assessment.
  • At a pre-exam meeting, discuss the assessment with all participating assessors to ensure some level of consensus on expectations. Also hold a post-exam meeting to discuss outcomes and discrepancies.
  • Include external examiners in the assessment process. These external examiners should be practising clinicians.

When running a workshop, getting input from external (perceived to be objective) people can give what you’re trying to do an air of credibility that may be missing, especially if you’re presenting to peers / colleagues.

2 principles:
Don’t aim for objectivity, aim for consensus
Multiple sources of input can improve the quality of the assessment

2 practical things:
Get input from internal and external sources when developing assessment tasks
Provide a standard source for procedures / skills so that all students can work from the same perspective

Article on work based assessment from BMJ