Assessing Clinical Competence with the Mini-CEX

This is the first draft of an article that I published in The Clinical Teacher mobile app.

Introduction

The assessment of clinical competence is an essential component of clinical education, but it is challenging because of the range of factors that can influence the outcome. Clinical teachers must be able to make valid and reliable judgements of students’ clinical ability, yet this is complex: in general, the more valid and reliable a test is, the longer and more complicated it is to administer. The mini-Clinical Evaluation Exercise, or mini-CEX, was developed in response to some of the challenges of the traditional clinical evaluation exercise (CEX) and has been found to be a feasible, valid and reliable tool for the assessment of clinical competence.

Assessment of competence

Competence in clinical practice is defined as “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individuals and communities being served” (Epstein & Hundert, 2002). The assessment of competence can take a range of forms in clinical education, but this article will only discuss competence in the physical examination of patients.

Teaching physical examination skills is a unique challenge in clinical education because of the many variables that influence how it is conducted. Consider how each of the following factors plays a role in the quality of teaching and learning that happens: the teachers’ own clinical skills; trainees’ prior knowledge, skills and interest; the availability of patients with the necessary findings; patients’ willingness to be examined by a group of doctors and trainees who may have no impact on their clinical care; the physical environment, which is usually less than comfortable; and trainee fatigue levels. In addition, the session should be relevant to the student and have significant educational value, otherwise there is the risk that it will degenerate into a “show and tell” exercise (Ramani, 2008).

This article will demonstrate how the mini-CEX provides a structured way to achieve the following goals of clinical assessment (Epstein, 2007):

  • Optimise the capabilities of all learners and practitioners by providing motivation and direction for future learning
  • Protect the public by identifying incompetent physicians
  • Provide a basis for choosing applicants for advanced training

The mini-Clinical Evaluation Exercise

The mini-CEX is a method of assessing the clinical competence of students in an authentic clinical setting, while at the same time providing a structured means of giving feedback to improve performance. It involves the direct observation of a focused clinical encounter between a student and a patient, followed immediately by structured feedback designed to improve practice. It was developed in response to the shortcomings of both the traditional bedside oral examination and the original clinical evaluation exercise (CEX) (Norcini, 2005).

In the mini-CEX, the student conducts a subjective and objective assessment of a patient, focusing on one aspect of the patient’s presentation, and finishing with a diagnosis and treatment plan. The clinician scores the student’s performance on a range of criteria using the structured form, and provides the student with feedback on their strengths and weaknesses. The clinician highlights an area that the student can improve on, and together they agree on an action the student can take that will help them in their development. This can include a case presentation at a later date, a written exercise that demonstrates clinical reasoning, or a literature search (Epstein, 2007).

The session is relatively short (about 15 minutes) and should be incorporated into the normal routine of training. Ideally, the student should be assessed in multiple clinical contexts by multiple clinicians, although it is up to the student to identify when and with whom they would like to be assessed (Norcini, 2005). Students should be observed at least four times by different assessors to get a reliable assessment of competence (Norcini & Burch, 2007). The mini-CEX is a feasible, valid and reliable assessment tool with high fidelity for the evaluation of clinical competence (Nair, et al., 2008).

The mini-CEX is a good example of a workplace-based assessment method that fulfils three requirements for facilitating learning (Norcini & Burch, 2007):

  1. The course content, expected competencies and assessment practices are aligned
  2. Feedback is provided either during or immediately after the assessment
  3. The assessment is used to direct learning towards desired outcomes

Structure of a mini-CEX form

Each of the competences in Table 1 below is assessed on a 9-point scale where 1-3 are “unsatisfactory”, 4 is “marginal”, 5-6 are “satisfactory”, and 7-9 are “superior” (Norcini, et al., 2005). In addition to the competences documented below, there is also space for both student and assessor to record their experience of the assessment, indicating their satisfaction with the process, the time taken for the encounter, and the experience of the assessor.

Table 1: Competencies of the mini-CEX form, with a descriptor of a satisfactory trainee for each

  • History taking: Facilitates the patient’s telling of their story; effectively uses appropriate questions to obtain accurate, adequate information; responds appropriately to verbal and non-verbal cues.
  • Physical exam: Follows an efficient, logical sequence; examination appropriate to the clinical problem; explains to the patient; sensitive to the patient’s comfort and modesty.
  • Professionalism: Shows respect, compassion and empathy; establishes trust; attends to the patient’s needs for comfort, respect and confidentiality; behaves in an ethical manner; aware of relevant legal frameworks; aware of own limitations.
  • Clinical judgement: Makes an appropriate diagnosis and formulates a suitable management plan; selectively orders/performs appropriate diagnostic studies; considers risks and benefits.
  • Communication skills: Explores the patient’s perspective; jargon free; open and honest; empathetic; agrees the management plan/therapy with the patient.
  • Organisation/efficiency: Prioritises; is timely; succinct; summarises.
  • Overall clinical care: Demonstrates satisfactory clinical judgement, synthesis, caring, effectiveness and efficiency; appropriate use of resources; balances risks and benefits; aware of own limitations.

Role of the assessor

The assessor does not need prior knowledge of, or experience with assessing, the student, but should have some experience in the domain of expertise that the assessment is relevant to. The patient must be made aware that the mini-CEX is going to be used to assess a student’s level of competence during their care, and they should give consent for this to happen. It is important to note that the session should be led by the trainee, not the assessor (National Health Service, n.d.).

The assessor must also ensure that the patient and the assessment task selected are an appropriate example of something that the student could reasonably be expected to be able to do. Remember that the mini-CEX is only an assessment of competence within a narrow scope of practice, and therefore only a focused task will be assessed. The assessor should also record the complexity of the patient’s problem, as there is some evidence that assessors score students higher on cases of increased complexity (Norcini, 2005).

After the session has been completed, the assessor must give feedback to the student immediately, highlighting their strengths as well as areas in which they can improve. Together, clinician and student must agree on an educational action that the student can take in order to improve their practice. It is also recommended that assessors attend at least a basic workshop introducing the mini-CEX; informal discussion is likely to improve both the quality of the assessment and the quality of the feedback given to students (Norcini, 2005).

Advantages of the mini-CEX

In addition to being feasible, valid and reliable, the mini-CEX has the following strengths:

  • It is used in the clinical context with real patients and clinician educators, as opposed to the Objective Structured Clinical Exam (OSCE), which uses standardised patients.
  • It can be used in a variety of clinical settings, including the hospital, outpatient clinic and trauma unit, and while it was designed to be administered in the medical field, it is equally useful for most health professionals. The broader range of clinical challenges improves the quality of the assessment and of the educational feedback that the student receives.
  • The assessment is carried out by a variety of clinicians, which improves the reliability and validity of the tool, but also provides a variety of educational feedback for the student. This is useful because clinicians will often have different ways of managing the same patient, and it helps for students to be aware of the fact that there is often no single “correct” way of managing a patient.
  • The assessment of competence is accompanied with real, practical suggestions for improvement. This improves the validity of the score given and provides constructive feedback that the student can use to improve their practice.
  • The process provides a complete and realistic clinical assessment, in that the student must gather and synthesise relevant information, identify the problem, develop a management plan and communicate the outcome.
  • It can be included in students’ portfolios as part of their collection of evidence of general competence.
  • The mini-CEX encourages the student to focus on one aspect of the clinical presentation, allowing them to prioritise the diagnosis and management of the patient.

Challenges when using the mini-CEX

There is some evidence that assessor feedback, in terms of developing a plan of action, is often ignored, negating the educational component of the tool. In addition, students often fail to reflect on the session or to provide any form of self-evaluation. It is therefore essential that faculty training is considered part of an integrated approach to improving students’ clinical competence, because the quality of the assessment is dependent on faculty skills in history taking and physical examination, demonstration, observation, assessment and feedback (Holmboe, et al., 2004a). Another point to be aware of when considering the use of the mini-CEX is that it does not allow for the comprehensive assessment of a complete patient examination (Norcini, et al., 2003).

Practice points

  • The mini-CEX provides a structured format for the assessment of students’ clinical competence within a focused physical examination of a patient
  • It is a feasible, valid and reliable method of assessment when it is used by multiple assessors in multiple clinical contexts over a period of time
  • Completion of the focused physical examination should be followed immediately by the feedback session, which must include an activity that the student can engage in to improve their practice

Conclusion

The mini-CEX has been demonstrated to be a valid and reliable tool for the assessment of clinical competence. It should be administered by multiple assessors in multiple clinical contexts in order for it to achieve its maximum potential as both an assessment and an educational tool.

 

References and sources

Workplace-based assessment

Yesterday I attended a workshop / seminar on workplace-based assessment given by John Norcini, president of FAIMER and creator of the mini-CEX. Here are the notes I took.

Methods
Assessment can be summative (assessment of “acquired learning”, which has dominated assessment practice) or formative (feedback that helps the trainee to learn; assessment for learning).

The methods below move assessment into the workplace, and require observation and feedback.

Portfolios (“collections of measures”) are workplace-based / encounter-based and must include observation of the encounter and procedures, a patient record audit, and 360-degree assessment. The trainee is evaluated on the contents of the portfolio. The training programme maintains the portfolio, but the trainee may be expected to contribute to it.

“Tick box”-type assessment isn’t necessarily a problem; it depends on how faculty observe and assess the tasks on the list.

Other: medical knowledge test

The following assessment methods are all authentic, in the sense that they need to be based in the real world, and assess students on what they are actually doing, not what they do in an “exam situation”.

Mini-CEX
The assessor observes a trainee during a brief (5-10 min) patient encounter, and evaluates the trainee on a few aspects / dimensions of the encounter. The assessor then provides feedback. Ideally there should be different patients, different assessors and different aspects over time. The whole exercise should take 10-15 minutes.

Direct observation of procedural skills (DOPS)
A 10-15 minute exercise: faculty observe a patient encounter, with an emphasis on procedures; the assessor rates the trainee along a number of dimensions and then provides feedback.

Chart stimulated recall
The assessor reviews a patient record in which the trainee has made notes. The discussion is centred on the trainee’s notes, and the assessor rates things like diagnoses, planning, treatment, etc. in an oral exam with the trainee, asking questions about clinical reasoning based on the notes. Takes 10-15 minutes, and should cover multiple encounters. Must use actual patient records → validity / authenticity.

360 degree evaluation
The trainee nominates peers, faculty, patients, self, etc., who then evaluate the trainee. Everyone fills out the same form, which assesses clinical and generic skills. The trainee is given the self-ratings, assessor ratings and mean ratings. Discrepancies form a foundation for discussion around misconceptions. Good for assessing teamwork, communication, interpersonal skills, etc.

There are forms available for these tasks, but in reality, since it’s formative, you can make up a form that makes sense for your own profession. These assessments are meant to be brief, almost informal, encounters. They should happen as part of the working process, not scheduled as part of an “evaluation” process. This should also not replace a more comprehensive, in-depth evaluation. They may also be more appropriate for more advanced trainees, and undergrad students may be better served with a “tick-list”-type assessment tool, since they’re still learning what to do.

Don’t aim for objectivity, aim for consensus. Aggregating subjective judgements brings us to what we call “objective”. We can’t remove subjectivity, even in the most rigorous MCQs, as it’s human beings who make choices about what to include, etc. So objectivity is actually impossible to achieve, but consensus can be achieved.

For these methods, you can make the trainee responsible for the process (i.e. they can’t progress / complete without doing all the tasks), so the trainee decides which records, when it takes place, who will assess. This creates an obvious bias. Or, faculty can drive the process, in which case it often doesn’t get done.

Why are workplace methods good for learning?
There is good evidence that trainees are not observed often during their learning, i.e. there is a lack of formative assessment during the programme. Medical students are often observed for less than 10% of their time in clinical settings. If trainees aren’t being observed, they aren’t getting feedback related to their performance.

WPBA is critical for learning and has a significant influence on achievement. One of the four major factors that influence learning is feedback, which accounts for large effect sizes in learning. Feedback alone has a positive effect on achievement in about 70% of studies. Feedback is based on observation. Good feedback is often about providing sensitive information to individuals, which can be challenging in a group. Positive feedback given early in training can have long-lasting effects, and can be given safely in groups.

Feedback given by different professions, at different levels, is a good thing for trainees. So, observation of procedures, etc. should be done by a variety of people, in a variety of contexts. People should be targeted for feedback, based on the type of feedback they’re most appropriate to give i.e. to give feedback on what they do best. So, it’s fine for a physio to give feedback on a doctor’s performance, but it might be about teamwork ability, rather than medical knowledge.

Giving feedback is different from giving comments. Feedback creates a pathway to improvement of learning, whereas comments might just make students feel better for a short period of time.

Types of training

Massed – many people together for a short period of time; is intense, is faster, and results in higher levels of confidence among trainees and greater satisfaction

Spaced – many people, spread out over time; results in longer retention and better performance

Retrieval of information or of a performance enhances learning. Learning isn’t only about information going in; it’s also about how to retrieve information. Testing forces retrieval. Regular repetition of a performance leads to better performance of a task.

When directly observing a performance, faculty often don’t agree with each other on the quality of that performance. So, you need to have several observations.
All patients are different, so you have to have observations of several patients.
The time frame for a long-case assessment is unreasonable in the real world, so assessment should be within a time frame that is authentic.

WPBA focuses on formative assessment, requires observation and feedback, directs and creates learning, and responds to the problems of traditional clinical assessment.

Rating students on a scale of unsatisfactory, satisfactory, etc. is formative and doesn’t carry the same weight as a pass / fail, summative assessment. We also need to make sure that the dimensions of the assessment are commonly defined or understood, and that faculty expectations for the assessment are the same.

Assessment forms should be modified to suit the context in which they are to be used.

Global vs. checklist assessments
The mini-CEX is a type of global assessment, i.e. it’s a judgement based on a global perception of the trainee. Our assessments are more global assessments. The descriptions of behaviours / dimensions are meant to indicate to assessors what they should be thinking about during the assessment.
A check list is a list of behaviours, and when the behaviour occurs, the trainee gets a tick.
Our assessment forms were mixing the two types of form, which may be why there were problems.

Faculty development should aim to “surface disagreement”, because that is how you generate discussion.

Conducting the encounter

  • Be prepared and have goals for the session
  • Put yourself into the right position
  • Minimise external interruptions
  • Avoid intrusions

Characteristics of effective faculty development programmes (Skeff, 1997) – link to PDF

Faculty training / workshops are essential to prepare faculty to use the tools. It makes them more comfortable, as well as more stringent with students. If you’re not confident in your own ability, you tend to give students the benefit of the doubt. Workshops can be used to change role model behaviours.

Feedback

  • Addresses three aspects: Where am I going? How am I going? Where to next?
  • Four areas that feedback can focus on: task, process, self-regulation, self as a person (this last point is rarely effective, and should be avoided, therefore feedback must focus on behaviour, not on the person)
  • Response to feedback is influenced by the trainee’s level of achievement, their culture, perceptions of the accuracy of the feedback, perceptions of the credibility and trustworthiness of the assessor, and perceptions of the usefulness of the feedback
  • The technique of the assessor influences the impact that the feedback has: establish an appropriate interpersonal climate, choose an appropriate location, elicit the trainee’s feelings and thoughts, focus on observed behaviours, be non-judgemental, be specific, offer the right amount of feedback (avoid overwhelming), and offer suggestions for improvement
  • Provide an action plan and close the loop by getting student to submit something

Novice student: emphasise feedback on the task / product / outcome
Intermediate student: emphasise specific processes related to the task / performance
Advanced student: emphasise global process that extends beyond this specific situation e.g. self-regulation, self-assessment.

Necessary to “close the loop”, so give students something to do, i.e. an action plan that requires the student to go away and do something concrete that aims to improve an aspect of their performance.

Asking students what their impressions of the task were is a good way to set up self-regulation / self-assessment by the student.

Student self-report on something like confidence may be valid, but student self-report on competence is probably not, because students are not good judges of their own competence.

Summary
Provide an assessment of strengths and weaknesses, enable learner reaction, encourage self-assessment, develop an action plan.

Quality assurance in assessment (this aspect of the workshop was conducted by Dr. Marietjie de Villiers)

Coming to a consensual definition:

  • External auditors (extrinsic) vs self-regulated (intrinsic)
  • Developing consensus as to what is being assessed, how, etc. i.e. developing outcomes
  • Including all role players / stakeholders
  • Aligning outcomes, content, teaching strategies, assessment i.e. are we using the right tools for the job?
  • “How can I do this better?”
  • Accountability (e.g. defending a grade you’ve given) and responsibility
  • There are logistical aspects to quality assurance, i.e. bureaucracy and logistics
  • A quality assurance framework may feel like a lot of work when everything is going smoothly, but it’s an essential “safety net” when something goes wrong
  • Quality assurance has no value if it’s just “busy work” – it’s only when it’s used to change practice, that it has value
  • Often supported with a legal framework

Some quality assurance practices by today’s participants:

  • Regular review of assessment practices and outcomes can identify trends that may not be visible at the “ground level”.
  • Problems identified should lead to changes in practice.
  • Train students how to prepare for clinical assessments. Doesn’t mean that we should coach them, but prepare them for what to expect.
  • Student feedback can also be valuable, especially if they have insight into the process.
  • Set boundaries, or constraints on the assessment so that people are aware that you’re assessing something specific, in a specific context.
  • Try to link every procedure / skill to a reference, so that every student will refer back to the same source of information.
  • Simulating a context is not the same as using the actual context.
  • Quality assurance is a long-term process, constantly being reviewed and adapted.
  • Logistical problems with very large student groups require some creativity in assessment, as well as the evaluation of the assessment.
  • Discuss the assessment with all participating assessors at a pre-exam meeting to ensure some level of consensus regarding expectations. Also have a post-exam meeting to discuss outcomes and discrepancies.
  • Include external examiners in the assessment process. These external examiners should be practising clinicians.

When running a workshop, getting input from external (perceived to be objective) people can give what you’re trying to do an air of credibility that may be missing, especially if you’re presenting to peers / colleagues.

2 principles:
Don’t aim for objectivity, aim for consensus
Multiple sources of input can improve the quality of the assessment

2 practical things:
Get input from internal and external sources when developing assessment tasks
Provide a standard source for procedures / skills so that all students can work from the same perspective

Article on work based assessment from BMJ


SAFRI 2011 (session 2) – day 4

Reliability and validity

Validity

Important for assessment, not only for research

It’s the scores that are valid and reliable, not the instrument

Sometimes the whole is greater than the sum of the parts, e.g. when a student gets all the check marks but doesn’t perform competently overall: the examiner can tick each competency being assessed, but the student doesn’t establish rapport with the patient. This is difficult to address.

What does the score mean?

Students are efficient in the use of their time i.e. they will study what is being assessed because the inference is that we’re assessing what is important

Validity can be framed as an “argument / defense” proposition

Our Ethics exam is a problem of validity. Written tests measure knowledge, not behaviour e.g. students can know and report exactly what informed consent is and how to go about getting it, but may not pay it any attention in practice. How do we make the Ethics assessment more valid?

“Face” validity doesn’t exist; it’s more accurately termed “content” validity. “Face” validity basically amounts to saying that something looks OK.

What are the important things to score? Who determines what is important?

There are some things that standardised patients can’t do well e.g. trauma

Assessment should sample more broadly from a domain. This improves validity and also students don’t feel like they’ve wasted their time studying things that aren’t assessed. The more assessment items we include, the more valid the results

Scores drop if the timing of the assessment is inappropriate, e.g. too much or too little time → lower scores, as students either rush or try to “fill” the time with something that isn’t appropriate for the assessment.

First round scores in OSCEs are often lower than in later rounds.

Even though the assessment is meant to indicate competence, there’s no way to predict whether practitioners are actually competent.

Students really do want to learn!

Reliability

We want to ensure that a student’s observed score is a reasonable reflection of their “true ability”.
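
To unpack “true ability” a little (this is my own gloss in classical test theory terms, not something covered in the session): an observed score X can be modelled as a true score T plus random measurement error E, and reliability is the proportion of observed-score variance that is due to true-score variance:

$$X = T + E, \qquad \text{reliability} = \frac{\sigma_T^2}{\sigma_X^2}$$

A reliability of 0.8 would therefore mean that roughly 80% of the variation in observed scores reflects real differences between students rather than measurement error, which is why a single unstandardised observation is such a shaky basis for judging ability.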

In reliability assessments, how do you reduce the learning that occurs between assessments?

In OSCEs, use as many cases / stations as you can, and have a different assessor for each station. This is the most effective rating design.
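
A rough sketch of why more stations help (my own addition, using the Spearman–Brown prophecy formula, which wasn’t discussed in the session): if a single station has reliability $\rho_1$, the reliability of a $k$-station OSCE is approximately

$$\rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1}$$

So if one station on its own has a modest reliability of 0.3, ten stations would give roughly $\frac{10 \times 0.3}{1 + 9 \times 0.3} \approx 0.81$, which is why OSCEs rely on many short stations rather than one or two long cases.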

We did a long session on standard setting, which was fascinating, especially when it came to having to defend the cut-scores of exams, i.e. what criteria do we use to say that 50% (or 60 or 70) is the pass mark? What data do we have to defend that standard?

Didn’t even realise that this was something to be considered; good to know that methods exist to use data to substantiate decisions made with regard to the standards that are set (e.g. the Angoff method).
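
As a rough illustration of how the Angoff method works (my own worked example, not from the workshop): each judge estimates the probability that a borderline, minimally competent candidate would answer each item correctly, and the cut score is the average of those estimates across items and judges:

$$\text{cut score} = \frac{1}{J}\sum_{j=1}^{J}\sum_{i=1}^{I} p_{ij}$$

where $p_{ij}$ is judge $j$’s estimated probability for item $i$, over $I$ items and $J$ judges. For a 10-item test, if three judges’ summed estimates are 6.2, 5.8 and 6.6, the cut score is (6.2 + 5.8 + 6.6) / 3 = 6.2 out of 10, i.e. a data-based pass mark of 62% rather than an arbitrary 50%.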

Should students be able to compensate for poor scores in one area with good scores in another? Should they have to pass every section that we identify as being important? If it’s not important, why is it being assessed?

Norm-referenced criteria are not particularly useful to determine competence. Standards should be set according to competence, not according to the performance of others.

Standard setting panels shouldn’t give input on the quality of the assessment items

You can use standard setting to lower the pass mark in a difficult assessment, and to raise the pass mark in an easier exam

Alignment of expectations with actual performance

Setting up an OSCE

  • Design
  • Evaluate
  • Logistics

Standardised, compartmentalised (i.e. not holistic), variables removed / controlled, predetermined standards, variety of methods

Competencies broken into components

Is at the “shows how” level of Miller’s pyramid (Miller, 1990, The assessment of clinical skills, Academic Medicine, 65: S63-S67)

Design an OSCE, using the following guidelines:

  • Summative assessment for undergraduate students
  • Communication skill
  • Objective
  • Instructions (student, examiner, standardised patient)
  • Score sheet
  • Equipment list

Criticise the OSCE stations of another group

 

Assessing clinical performance

Looked at using mini-CEX (clinical evaluation exercise)

Useful for formative assessment

Avoid making judgements too soon → your impression may change over time