Assessing teams instead of individuals

Patient outcomes are almost always influenced by how well the team works together, yet all of the disciplines conduct assessments of individual students. Yes, we might ask students who they would refer to, or who else is important in the management of the patient, but do we ever actually watch a student talk to a nurse, for example? We assess communication skills based on how they interact with the patient, but why don’t we make observations of how students communicate with other members of the team when it comes to preparing a management plan for the patient?

What would an assessment task look like if we assessed teams, rather than individuals? What if we asked an OT, physio and SALT student to sit down and discuss the management of a patient? Imagine how much insight this would give us into students’ 1) interdisciplinary knowledge, 2) teamwork, 3) communication skills, 4) complex clinical reasoning, and 5) patient-centred practice. What else could we learn in such an assessment? I propose that we would learn a lot more about power relations between the students in different disciplines. We might even get some idea of students’ levels of empathy for peers and colleagues, and not just patients.

What are the challenges to such an assessment task? There would be logistical issues around when the students would be available together, setting concurrent clinical practice exams, and getting 2-3 examiners together (if the students are going to be working together, so should the examiners). What else? Maybe the examiners would realise that we have different expectations of what constitutes “good” student performance. Maybe we would realise that our curricula are not aligned, i.e. that we think about communication differently. Maybe even – horror – that we’re teaching the “wrong” stuff. How would we respond to these challenges?

What would the benefits be to our curricula? How much would we learn about how we teach? We say that our students graduate with skills in communication, teamwork, conflict resolution, and so on, but how do we know? With the increasing trend of institutions talking about interprofessional education, I would love to hear what they have to say about interprofessional assessment in the hospital with real patients (and no, having students from the different disciplines do a slideshow presentation on their research project doesn’t count). Or about assessment of students working together with community members in rural areas, where we actually watch them sit down with real people and observe their interactions.

If you have any thoughts on how to go about doing something like this, please get in touch. I’d love to talk about some kind of collaborative research project.

Are we gatekeepers, or locksmiths?

David Nicholls at Critical Physiotherapy recently blogged about how we might think about access to physiotherapy education, and offers the metaphor of a gated community as one possibility.

The staff act as the guards at the gateway to the profession and the gate is a threshold across which students pass only when they have demonstrated the right to enter the community.

This got me thinking about the metaphors we use as academics, particularly those that guide how we think about our role as examiners. David’s post reminded me of a conversation I had with a colleague soon after entering academia. I was working as an external clinical examiner for a local university and we were evaluating a 3rd year student who had not done very well in the clinical exam. We were talking about whether the student had demonstrated enough of an understanding of the management of the patient in order to pass. My colleague said that we shouldn’t feel bad about failing the student because “we are the gatekeepers for the profession”. The metaphor of gatekeeper didn’t feel right to me at the time, and over the next few years I struggled with the idea that part of my job was to prevent students from progressing through the year levels. Don’t get me wrong, I’m not suggesting that we allow incompetent students to pass. My issue was with how we think about our roles as teachers and where the power to determine progression lies.

[Image: gatekeeper]
I imagine that this is how many students think of their lecturers and clinical examiners: mysterious possessors of arcane, hidden knowledge.

A gatekeeper is someone who has power to make decisions that affect someone who does not. In this metaphor, the examiner is the gatekeeper who decides whether or not to allow a student to cross the threshold. Gatekeeping is about control, and more specifically, controlling those who have less power. From the students’ perspective, the idea of examiner-as-gatekeeper moves the locus of control externally, rather than acknowledging that success is largely determined by one’s motivation. It is the difference between taking personal responsibility for not doing well, or blaming some outside factor for poor performance (“The test was too difficult”; “The examiner was too strict”; “The patient was non-compliant”).

As long as we are the gatekeepers who control students’ progress through the degree, the locus of control exists outside of the student. They do the work and we either block them or allow them through. We have the power, not students. If they fail, it is because we failed them. It is far more powerful – and useful for learning – for students to take on the responsibility for their success or failure. To paraphrase from my PhD thesis:

If knowledge can exist in the spaces between people, objects and devices, then it exists in the relationships between them. [As lecturers, we should] encourage collaborative, rather than isolated, activity, where the responsibility for learning is shared with others in order to build trust. Facilitators must be active participants in completing the activities, while emphasising that students are partners in the process of teaching and learning, because by completing the learning activity together students are exposed to the tacit, hidden knowledge of the profession. In this way, lecturers are not authority figures who are external to the process of learning. Rather than being perceived as gatekeepers who determine progression through the degree by controlling students’ access to knowledge, lecturers can be seen as locksmiths, teaching students how to make their own keys, as and when it is necessary.

By thinking of lecturers (who are often also the examiners) as master locksmiths who teach students how to make their own keys, we are moving the locus of control back to the student. The gates that mark thresholds to higher levels of the profession still exist, as they should. It is right that students who are not ready for independent practice should be prevented from progressing. However, rather than thinking of the examiner as a gatekeeper who prevents the student from crossing the threshold, we could rather think of the student as being unable to make the right key. The examiner is then simply an observer who recognises the student’s inability to open the gate. It is the student who is responsible for poor performance and not the examiner who is responsible for failing the student.

I therefore suggest that the gatekeeper metaphor for examiners be replaced with that of a locksmith, where students are regarded as apprentices and novice practitioners who are learning a craft. From this perspective we can more carefully appreciate the interaction that is necessary in the teaching and learning relationship, as we guide students towards learning how to make their own keys as they control their own fate.


Caveat: if we are part of a master-apprentice relationship with students, then their failure must be seen as our failure too. If my student cannot successfully create the right key to get through the gate, I must faithfully interrogate my role in that failure, and I wonder how many of us would be comfortable with that.

Thanks to David for posting Physiotherapy Education as a Gated Community, and for stimulating me to think more carefully about how the metaphors we use inform our thinking and our practice.

Abstract: Student Success and Engagement project

Our faculty has implemented a three-year research project looking at improving Student Success and Engagement in the faculty. The project is being coordinated across several departments in the faculty and is the first time that we are collaborating on this scale. I will be using this blog as a public progress report of the project, in order to highlight our processes and challenges, as well as to report on draft findings. Here is the abstract of the project proposal.

Achieving promising throughput rates and improving retention remain challenges for most higher education institutions. Student success in South African higher education has been unsatisfactory and universities have not been effective in developing strategies to enhance students’ learning experiences. Low throughput and poor retention rates have been identified as challenges in the Faculty of Community and Health Sciences (CHS) at UWC. While success rates in the faculty are reported as the proportion of students who complete their qualification in the shortest possible time, many students require an additional year to graduate. It is important to develop strategies that exploit students’ capacity to engage in their learning, as this may create a space that is conducive to student success. Therefore, the aim of this project is to identify and implement strategies to improve student success in the CHS faculty at UWC through an exploration of student and lecturer engagement. This project will explore student engagement in relation to the domains of assessment, academic literacies and tutoring.

Design-based research has been selected as the overarching method, as it is informed by the teacher’s desire to improve learning, based on sound theoretical principles. All of the undergraduate students (N=2595) and lecturers in the CHS faculty will be invited to participate in this study. Phase 1 includes the implementation of the South African Survey for Student Engagement (SASSE) and the Lecturer Survey for Student Engagement (LSSE). We will also conduct in-depth interviews and focus group discussions with key informants, who are likely to have insight into the challenges experienced in the areas of assessment, literacy and tutoring, and who will be identified through purposive sampling. In addition, document analyses of the UWC Assessment policy, the Teaching and Learning policy and the Charter of Graduate Attributes will be conducted.

During phase 2, a systematic review will be conducted in order to ascertain which interventions have been demonstrated to increase student engagement in higher education. This data will be combined with the insights gained from Phase 1, and used to inform a series of workshops and seminars in the faculty, aimed at developing and refining principles to enhance student engagement. In addition, course evaluations and other documents will be reviewed, and data related to the domains of assessment, literacies and tutoring will be extracted and compared to the recommended guidelines and principles derived from the systematic review. These principles will then be used to inform interventions that are then implemented in the CHS faculty.

Following implementation of the interventions, Phase 3 will consist of focus group discussions with the lecturers and students who were involved in the project, especially those in the areas of assessment, literacy and tutoring. A second South African Survey of Student Engagement (SASSE) and Lecturer Survey of Student Engagement (LSSE) will be conducted at the end of 2016 in order to determine whether there has been a change in student engagement. By the end of Phase 3 of the project, a range of interventions within the domains of assessment, literacies and tutoring will have been implemented and evaluated. Ethics clearance will be sought from the University of the Western Cape Senate Research Committee, as well as permission from the Registrar and the various Heads of Department in the Faculty.

Objective Structured Clinical Exams

This is the first draft of the next piece of content that I’ll be publishing in my Clinical Teacher app.

Abstract

The Objective Structured Clinical Examination was introduced as an assessment method that aimed to address some of the challenges that arose with the assessment of students’ competence in clinical skills. In a traditional clinical examination there are several interacting variables that can influence the outcome, including the student, the patient, and the examiner. In the structured clinical examination, two of the variables – the patient and the examiner – are more controlled, allowing for a more objective assessment of the student’s performance.

The OSCE is a performance-based assessment that can be used in both formative and summative situations. It is a versatile, multipurpose tool that can be used to evaluate healthcare students in the clinical context, assessing competency through objective testing and direct observation. As an assessment method it is precise, objective and reproducible, which means that it allows consistent testing of students across a wide range of clinical skills. Unlike the traditional clinical exam, the OSCE can evaluate areas most critical to the performance of healthcare professionals, such as communication skills and the ability to handle unpredictable patient behaviour. However, the OSCE is not inherently without fault and is only as good as the team implementing it. Care should be taken not to assume that the method is in itself valid, reliable or objective. In addition, the OSCE cannot be used as a measure of all things important in medical education and should be used in conjunction with other assessment tasks.

 

Introduction and background

The OSCE was developed in an attempt to address some of the challenges with the assessment of clinical competence that were prevalent at the time (Harden, Stevenson, Wilson-Downie & Wilson, 1975). These included problems with validity, reliability, objectivity and feasibility. In the standard clinical assessment of the time, the student’s performance was assessed by two examiners who observed them with several patients. However, the selection of patients and examiners meant that chance played too dominant a role in the examination, leading to variations in the outcome (ibid.). Thus there was a need for a more objective and structured approach to clinical examination. The OSCE assesses competencies through objective testing and direct observation. It consists of several stations at which candidates must perform a variety of clinical tasks within a specified time period, against predetermined criteria (Zayyan, 2011).

The OSCE is a method of assessment that is well-suited to formative assessment. It is a form of performance-based assessment, which means that a student must demonstrate the ability to perform a task under the direct observation of an examiner. Candidates are examined against predetermined criteria on the same or similar clinical scenarios or tasks, with marks recorded against those criteria, which enables recall, teaching audit and the determination of standards.

 

Rationale for the OSCE

While the OSCE attempts to address issues of validity, reliability, objectivity and feasibility, it should be noted that it cannot be all things to all people. It is practically impossible to have an assessment method that satisfies all the criteria of a good test in terms of validity and reliability. For example, the OSCE cannot be used to measure characteristics like empathy, commitment to lifelong learning, and care over time. These aspects of students’ competence should be assessed with other methods. Having said that, we should discuss the four important aspects of accurate assessment that inform the implementation of the OSCE (Barman, 2005).

Validity

Validity is a measure of how well an assessment task measures what it is supposed to measure, and may be regarded as the most important factor to be considered in an assessment. For a test to have a high level of validity, it must contain a representative sample of what students are expected to have achieved. For example, if the outcome of the assessment task is to say that the student is competent in performing a procedure, then the test must actually measure the student’s ability to perform the procedure. Note, however, that the OSCE tests a range of skills in isolation, which does not necessarily indicate students’ ability to perform the separate tasks as an integrated whole.

Reliability

Reliability is a measure of the stability of the test results over time and across sample items. In the OSCE, reliability may be low if there are few stations and short timeframes. Other factors that influence reliability include unreliable “standardised” patients, idiosyncratic scoring systems, patients, examiners and students who are fatigued, and noisy or disruptive assessment environments. The best way to improve the reliability of an OSCE is to have a high number of stations and to combine the outcomes with other methods of assessment.
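
To make the link between station count and reliability concrete, the Spearman-Brown prophecy formula (a standard psychometric result, not cited in the sources above) estimates the reliability of an exam built from several comparable stations. A minimal Python sketch with purely illustrative numbers:

```python
def spearman_brown(single_station_reliability: float, n_stations: int) -> float:
    """Predicted reliability of an exam built from n comparable stations,
    given the reliability of a single station (Spearman-Brown)."""
    r = single_station_reliability
    return (n_stations * r) / (1 + (n_stations - 1) * r)

# Illustrative values only: a single station with reliability 0.25
for n in (1, 5, 10, 20):
    print(n, round(spearman_brown(0.25, n), 2))
# 1 -> 0.25, 5 -> 0.62, 10 -> 0.77, 20 -> 0.87
```

Even under these assumed numbers, the pattern matches the advice above: adding stations raises overall reliability, with diminishing returns as the exam grows longer.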

Objectivity

The objectivity of the OSCE relies on the standardisation of the stations and the checklist method of scoring student performance, which theoretically means that every student will be assessed on the same task in the same way. However, there is evidence that inter-rater reliability can be low on the OSCE as well, meaning that there is still a bias present in the method. In order to reduce the effect of this bias, the OSCE should include more stations.

Feasibility

In the process of deciding whether or not to use the OSCE as an assessment method, i.e. whether or not it is feasible, there are a number of factors to be considered. These include the number of students to be assessed, the number of examiners available, the physical space available for running the exam, and the associated cost of these factors. It is important to note that the OSCE is more time-consuming and more expensive in terms of human and material cost than other assessment methods, for example the structured oral examination. In addition, the time required for setting up the examination is greater than that needed for traditional assessment methods, which must be taken into account when deciding whether or not to use the OSCE.

 

Advantages of the OSCE format

The OSCE format allows for the direct observation of a student’s ability to engage with clinical ethics skills during a patient interaction. It can also be used effectively to evaluate students’ communication skills, especially if standardised instruments for assessing these skills are used. In addition, the OSCE (Shumway & Harden, 2003; Chan, 2009):

  • Provides a uniform marking scheme for examiners and consistent examination scenarios for students, including pressure from patients.
  • Generates formative feedback for both the learners and the curriculum, whereby feedback that is gathered can improve students’ competency and enhance the quality of the learning experience.
  • Allows for more students to be examined at any one time. For example, when a student is carrying out a procedure, another student who has already completed that stage may be answering the questions at another station.
  • Provides for a more controlled setting because only two variables exist: the patient and the examiner.
  • Provides more insights about students’ clinical and interactive competencies.
  • Can be used to objectively assess other aspects of clinical expertise, such as physical examination skills, interpersonal skills, technical skills, problem-solving abilities, decision-making abilities, and patient treatment skills.
  • Student participation in an OSCE has a positive impact on learning because the students’ attention is focused on the acquisition of clinical skills that are directly relevant to clinical performance.

 

Preparation for an OSCE

The first thing to do when considering developing an OSCE is to ask what is to be assessed. It is important to realise that OSCEs are not appropriate for assessing all aspects of competence. For example, knowledge is best assessed with a written exam.

The venue where the OSCE is going to take place must be carefully considered, especially if it needs to be booked in advance. If there are large numbers of students, it may be worthwhile to have multiple tracks running in different venues. The advantages are that there will be less noise and fewer distractions. If space is not an issue, having separate rooms for each station is preferable, although multiple stations in a single room with partitions is also reasonable. If you will have real patients assisting, note that you will need rooms for them to rest in (Bouriscot, 2005).

Be aware that you will need to contact and confirm external examiners well in advance of running the OSCE. Clinicians are busy and will need lots of advance warning. It may be useful to provide a grid of available dates and times so that examiners can choose the sessions that suit them best (ibid.).

One of the key factors in the success of using the OSCE for assessment is the use of either real or standardised patients. This is a component that adds confidence to the reliability of the outcomes. Standardised patients are the next best thing to working with real patients. They are usually volunteers or actors who are trained to role-play the different psychological and physiological aspects of patients. Finding and training standardised patients is a significant aspect of preparing for an OSCE (Dent & Harden, 2005).

If equipment is required, ensure that there are lists available at every station, highlighting what equipment should be present in order for the student to successfully complete the station. You should go through each station with the list the day before the OSCE to ensure that all equipment is present (Bouriscot, 2005).

Mark sheets to be used for the OSCE must be developed in advance. Each examiner at each station must be provided with an appropriate number of mark sheets for the students, including an estimation of spoilage. If there are going to be large numbers of students, it may be worthwhile developing mark sheets that can be electronically scanned. If results are to be manually entered, someone will need to ensure that they have been captured correctly (Bouriscot, 2005).

 

Developing scenarios for each station

The number of stations in an examination is dependent on a number of factors, including the number of students to be assessed, the range of skills and content areas to be covered, the time allocated to each station, the total time available for the examination and the facilities available to conduct the examination (Harden & Cairncross, 1980). Preparing the content for each station should begin well in advance so that others can review the stations and perhaps even complete a practice run before the event. It may happen that a scenario is good in theory but that logistical complications make it unrealistic to run in practice.

The following points are important to note when developing stations (Bouriscot, 2005):

  • Instructions to students must be clear, so that they know exactly what is expected of them at each station
  • Similarly, instructions to examiners must make it clear what is expected of them
  • The equipment required at each station should be identified
  • A marking schedule should identify the important aspects of the skill being assessed
  • The duration of each station should be specified

Stations should be numbered so that there is less confusion for students who are moving between them, and also for examiners who will be marking at particular stations. Note that it is recommended to have one rest station for every 40 minutes of assessment (Bouriscot, 2005). Arrows, either on the floor or on the wall, will help candidates move between stations and avoid any confusion about the rotation.

While stations may be set up in any number of ways, one suggested format is for the student to rotate through two “types” of station: a procedure station and a question station (Harden, Stevenson, Wilson-Downie & Wilson, 1975). There are two advantages to this approach. First, it reduces the effect of cueing, whereby the question that the student must answer is presented at the same time as the instruction for performing the procedure; the nature of the question may prompt the student towards the correct procedure. By using two stations, the candidate is presented with a problem to solve or an examination to be carried out without the questions that come later. When the student gets to the “question” station, they are unable to go back to the previous station to change their response. Thus the questions do not provide a prompt for the examination. The second advantage of the station approach is that more students can be examined at any one time. While one student is performing a procedure, another student who has already completed that stage is answering the questions (ibid.).

 

Running an OSCE

It may be useful, if the venue is large, to have a map of the facility set up, including the location of specific stations. This can help determine early on which stations will be set up in which rooms, as well as the order of the exam. The number of available rooms will determine how many stations are possible, as well as how many tracks can be run simultaneously (and therefore how many times each track will need to be run). You will also need a space where subsequent groups of students can be sequestered while the previous round of students is finishing. If the exam is going to continue for a long time, you may need an additional room for examiners and patients to rest and eat.
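
As a rough planning aid, the arithmetic behind tracks and rounds can be sketched out. This is a minimal, hypothetical Python calculation (the function name and example numbers are mine, not from any of the sources cited here): each track accommodates as many candidates per round as it has stations, so the number of rounds is the number of students divided by the total number of occupied stations, rounded up.

```python
import math

def osce_schedule(num_students, stations_per_track, num_tracks,
                  station_minutes, changeover_minutes=1):
    """Rough planning estimate: how many rounds are needed and how long
    one complete circuit of a track takes. Hypothetical sketch only."""
    candidates_per_round = stations_per_track * num_tracks
    rounds = math.ceil(num_students / candidates_per_round)
    circuit_minutes = stations_per_track * (station_minutes + changeover_minutes)
    return rounds, circuit_minutes

# Example: 120 students, 2 parallel tracks of 15 stations, 5-minute stations
rounds, circuit = osce_schedule(120, 15, 2, 5)
print(rounds, circuit)  # 4 rounds, 90 minutes per circuit
```

Numbers like these are only a starting point, but they make it easier to see early on whether the available rooms, examiners and time can actually accommodate the cohort.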

Students should be informed in advance how they will proceed from one station to another. For example, will one bell be used to signal the end of one station and the beginning of another? If the OSCE is formative in nature, or a practice round, will a different buzzer be used to signal a period of feedback from the examiner? When the bell signalling the end of the station sounds, candidates usually have one minute to move to the next station and read the instructions before entering.

On the day of the exam, time should be allocated for registering students, directing them to stations, setting the time, indicating station changes (buzzers, bells, etc.), and assisting with final set-up changes as well as with dismantling stations afterwards. Each station must have the station number and instructions posted at the entrance, and standardised patients, examiners and candidates must be matched to the appropriate stations. Examiners and patients should be at their stations sufficiently in advance of the starting time in order to review the checklists and prepare themselves adequately. It may be possible to have a dry run of the station in order to help the patient get into the role.

It is possible to use paper checklists or to capture the marks with handheld devices like iPads or smartphones (see Software later). The benefit of using digital capture methods, as opposed to paper checklists, is that the data are already captured at the end of the examination, and feedback to students and the organisers can be provided more efficiently. If paper checklists are used, they must be collected at the end of the day and the data captured manually.
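
To illustrate the efficiency gain described above, here is a minimal, hypothetical sketch of how digitally captured checklist marks might be aggregated as soon as the exam ends. The record structure and field names are my own assumptions for illustration, not a description of any of the packages mentioned in the Software section below.

```python
from collections import defaultdict

# Hypothetical records as an examiner's device might store them:
# one row per (student, station), with each checklist item scored 0 or 1.
scores = [
    {"student": "S001", "station": 1, "checklist": [1, 1, 0, 1], "max": 4},
    {"student": "S001", "station": 2, "checklist": [1, 0, 1, 1, 1], "max": 5},
    {"student": "S002", "station": 1, "checklist": [1, 1, 1, 1], "max": 4},
]

totals = defaultdict(lambda: {"obtained": 0, "possible": 0})
for row in scores:
    totals[row["student"]]["obtained"] += sum(row["checklist"])
    totals[row["student"]]["possible"] += row["max"]

# A simple end-of-exam report, available as soon as the last station finishes
for student, t in sorted(totals.items()):
    print(f"{student}: {t['obtained']}/{t['possible']} "
          f"({100 * t['obtained'] / t['possible']:.0f}%)")
```

With paper checklists the same totals only become available after manual data capture, which is where the delay and transcription errors tend to creep in.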

Some of the common challenges that are experienced during the running of the OSCE include (Bouriscot, 2005):

  • Examiners not turning up – send reminders the week before and have reserves on standby
  • Standardised patients not turning up – have reserves on standby
  • Patients not turning up – remind them the day before, provide transport, plan for more patients than are needed
  • Patient discomfort with the temperature – ensure that the venue is warmed up or cooled down before the OSCE begins
  • Incorrect / missing equipment – check the equipment the day before, have spares available in case of equipment malfunction, batteries dying, etc.
  • Patients getting ill – have medical staff on hand
  • Student getting ill – take them somewhere nearby to lie down and recover

The above list demonstrates the range of complications that can arise during an OSCE. You should expect that things will go wrong and try to anticipate them. However, you should also be aware that there will always be room for improvement, which is why attention must be paid to evaluating the process. It is essential that the process be continually refined and improved based on student and staff feedback (Frantz, et al., 2013).

 

Marking of the OSCE

The marking scheme for the OSCE is intentionally and objectively designed. It must be concise, well-focused and unambiguous, with the aim of discriminating between good and poor student performance. The marking scheme must therefore be cognisant of the many possible choices students may make and provide scores that are appropriate to each performance (Zayyan, 2011).

The allocation of marks between the different parts of the examination should be determined in advance and will vary with, among other things, the seniority of the students. Thus, with junior students there will be more emphasis on technique, and fewer marks will be awarded for the interpretation of findings (Harden, Stevenson, Wilson-Downie & Wilson, 1975).

The following example marking rubric for OSCE stations is taken from Chan (2009):

 

Diagnosis
  • Excellent: Gave an excellent analysis and understanding of the patient’s problems and situation, applied medical knowledge to clinical practice and determined the appropriate treatment.
  • Proficient: Demonstrated medical knowledge with a satisfactory analysis of the patient’s problems, and determined the appropriate treatment.
  • Average: Showed a basic analysis and knowledge of the patient’s problems, but still provided the appropriate treatment.
  • Poor: Showed only a minimal level of analysis and knowledge of the patient’s problems, and was unable to provide the appropriate treatment.

Problem-solving skills
  • Excellent: Managed the time to suggest appropriate solutions to problems; more than one solution was provided; a logical approach to seeking solutions was observed.
  • Proficient: Managed the time to bring out only one solution; a logical flow was still observed but lacked relevance.
  • Average: Still able to bring out one solution on time; a logical flow was hardly observed.
  • Poor: Failed to bring out any solution in the specified time; no logical flow was observed.

Communication and interaction
  • Excellent: Obtained the detailed information needed for diagnosis; gave very clear and detailed explanations and answers to the patient; paid attention to the patient’s responses and words.
  • Proficient: Obtained the detailed information needed for diagnosis; gave clear explanations and answers to the patient; attempted but only paid some attention to the patient’s responses and words.
  • Average: Obtained only the basic information needed for diagnosis; attempted to give a clear explanation to the patient but omitted some points; did not pay attention to the patient’s responses and words.
  • Poor: Failed to obtain the information needed for diagnosis; gave ambiguous explanations to the patient.

Clinical skills
  • Excellent: Performed the appropriate clinical procedures for every clinical task with no omissions; no unnecessary procedures were done.
  • Proficient: Performed the required clinical procedures satisfactorily; committed a few minor mistakes or unnecessary procedures which did not affect the overall completion of the procedure.
  • Average: Performed the clinical procedures at an acceptable standard; committed some mistakes and performed some unnecessary procedures.
  • Poor: Failed to carry out the necessary clinical procedures; committed many mistakes and showed misconceptions about operating clinical apparatus.

 

Common mistakes made by students during the OSCE

It may be helpful to guide students before the examination by helping them to understand what the OSCE is not (Medical Council of Canada, n.d.).

  • Not reading the instructions carefully – The student must elicit from the “patient” only the precise information that the question requires. Any additional or irrelevant information provided must not receive a mark.
  • Asking too many questions – Avoid asking too many questions, especially if they are disorganised and erratic, and seem aimed at stumbling across the few appropriate questions that are relevant to the task. The short time period is designed to test candidates’ ability to elicit the most appropriate information from the patient.
  • Misinterpreting the instructions – This happens when candidates try to determine what the station is trying to test, rather than working through a clinically appropriate approach to the patient’s presenting complaint.
  • Using too many directed questions – Open-ended questions are helpful in this regard as they give the patient the opportunity to share more detailed information, while still leaving space for you to follow up with more directed questions.
  • Not listening to patients – Patients often report that candidates did not listen appropriately and therefore missed important information that was provided during the interview. In the case of using standardised patients, they may be trained to respond to an apparently indifferent candidate by withdrawing and providing less information.
  • Not explaining what you are doing in physical examination stations – Candidates often do not explain what they are doing during the examination, leaving the examiner guessing as to what was intended, or whether the candidate observed a particular finding. By explaining what you see, hear and intend to do, you provide the examiner with context that helps them score you appropriately.
  • Not providing enough direction in management stations – At stations that aim to assess management skills, the candidate should give the patient clear and specific direction about the proposed management plan.
  • Missing the urgency of a patient problem – When the station is designed to assess clinical priorities, work through the priorities first and then come back later for additional information if this was not elicited earlier.
  • Talking too much – The time that the candidate spends with the patient should be used effectively in order to obtain the most relevant information. Candidates should avoid showing off their vast knowledge base, and should speak to the patient with courtesy and respect while eliciting relevant information.
  • Giving generic information – The candidate should avoid giving generic information that is of little value to the patient when it comes to making an informed decision.

 

Challenges with the OSCE

While the OSCE has many positive aspects, it should be noted that there are also many challenges when it comes to setting one up and running it. The main critique of the OSCE is that it is very resource-intensive, but there are other disadvantages, which include (Barman, 2005; Chan, 2009):

  • Requiring a lot of organisation. However, an argument can also be made that the increased preparation occurs before the exam and allows an examiner’s time to be used more efficiently.
  • Being expensive in terms of manpower, resources and time.
  • Discouraging students from looking at the patient as a whole.
  • Examining a narrow range of knowledge and skills, and not testing history-taking competency properly. Students examine a number of different patients in isolation at each station instead of comprehensively examining a single patient.
  • Manual scoring of OSCE stations is time-consuming and increases the probability of mistakes.
  • It is nearly impossible to have children as standardised patients or patients with similar physical findings.

In addition, while being able to take a comprehensive history is an essential clinical skill, the time constraints necessary in an OSCE preclude this from being assessed. Similarly, because students’ skills are assessed in sections, it is difficult to make decisions regarding students’ ability to assess and manage patients holistically (Barman, 2005). Even if one were able to construct stations that assessed all aspects of clinical skills, it would only test those aspects in isolation rather than comprehensively integrating them all into a single demonstration. Linked to that, the OSCE also has a potentially negative impact on students’ learning because it contains multiple stations that sample isolated aspects of clinical medicine. The student may therefore prepare for the examination by compartmentalising the skills and not completely understanding the connection between them (Shumway & Harden, 2003). There also seems to be some evidence that while the OSCE is an appropriate method of assessment in undergraduate medical education, it is less well-suited for assessing the in-depth knowledge and skills of postgraduate students (Patil, 1993).

Challenges with reliability in the clinical examination may arise from the fact that different students are assessed on different patients, and one may come across a temperamental patient who may help some students while obstructing others. In addition, test scores may not reflect students’ actual ability, as repetitive demands may fatigue the student, patient or examiner. Student fatigue due to lengthy OSCEs may affect performance. Moreover, some students experience greater tension before and during examinations, as compared to other assessment methods. In spite of efforts to control patient and examiner variability, inaccuracies in judgement due to these effects remain (Barman, 2005).

 

Software for managing an OSCE

There is an increasing range of software that assists with setting up and running an OSCE. These services often run on a variety of mobile devices, offering portability and ease of use for examiners. One of the primary benefits of using digital, instead of paper, scoring sheets is that the results are instantly available for analysis and for reporting to students. Examples of the available software include OSCE Online, OSCE Manager and eOSCE.


Ten OSCE pearls

The following list is taken from Dent & Harden (2005), and includes lessons learned from practical experiences of running OSCEs.

  1. Make all stations the same length, since rotating students through the stations means that you can’t have some students finishing before others.
  2. Linked stations require preparation. For example, if station 2 requires the student to follow up on what was done at station 1, then no student can begin at station 2. This means that a staggered start is required. In this case, one student would begin the exam before everyone else. Then, when the main exam begins, the student at station 1 will move to station 2. This student will finish one station before everyone else.
  3. Prepare additional standardised patients, and have additional examiners available to allow for unpredictable events detaining either one.
  4. Have backup equipment in case any of the exam equipment fails.
  5. Have staff available during the examination to maintain security and help students move between stations, especially those who are nervous at the beginning.
  6. If there is a missing student, move a sign labelled “missing student” to each station as the exam progresses. This will help avoid confusion when other students move into the unoccupied station by mistake.
  7. Remind students to remain in the exam room until the buzzer signals the end of the station, even if they have completed their task. This avoids having students standing around in the areas between rooms.
  8. Maintain exam security, especially when running the exam multiple times in series. Ensure that the first group of students are kept away from the second group.
  9. Make sure that the person keeping time and sounding the buzzer is well-prepared, as they have the potential to cause serious confusion among examiners and students. In addition, ensure that the buzzer can be heard throughout the exam venue.
  10. If the rotation has been compromised and people are confused, stop the exam before trying to sort out the problem. If a student has somehow missed a station, rather allow them the opportunity to return at the end and complete it then.

 

Take home points

  • The OSCE aims to improve the validity, reliability, objectivity and feasibility of assessing clinical competence in undergraduate medical students
  • The method is not without its challenges, which include the fact that it is resource intensive and therefore expensive
  • Factors which can play a role in reducing confidence in the test results include student, examiner and patient fatigue.
  • The best way to limit the influence of factors that negatively impact on the OSCE is to have a high number of stations.
  • Being well-prepared for the examination is the best way to ensure that it runs without problems. However, even when you are well-prepared, expect there to be challenges.
  • The following suggestions are presented to ensure a well-run OSCE:
    • Set an exam blueprint
    • Develop the station cases with checklists and rating scales
    • Recruit and train examiners
    • Recruit and train standardised patients
    • Plan space and equipment needs
    • Identify budgetary requirements
    • Prepare for last-minute emergencies

 

Conclusion

The use of the OSCE format for clinical examination has been shown to improve the reliability and validity of the assessment, allowing examiners to say with more confidence that students are proficient in the competencies being tested. While OSCEs are considered to be fairer than other types of practical assessment, they do require significant investment in terms of finance, time and effort. However, these disadvantages are offset by the improvement in objectivity that emerges as a result of the approach.

 

Bibliography

Accepting student work as a gift

A few months ago we invited a colleague from the institution to give a short presentation in my department, sharing some of her ideas around research. At some point in the session, she said “I offer this to you, because…”. I forget the rest of the sentence but what was striking to me was how it had begun. It really resonated with something I’d read earlier this year, from Ronald Barnett’s book “A will to learn: Being a student in an age of uncertainty“. From Barnett:

Here are gifts given without any hope of even a ‘thank-you’, yet this ‘gift-giving’ looks for some kind of return. The feedback may come late; the marks may not be as hoped, but the expectation of some return is carried in these gifts. The student’s offerings are gifts and deserve to be recognized as such, despite their hoped-for return.

….

The language that I have in mind is one of proffering, of tendering, of offering, of sharing, and of presenting and gifting. The pedagogical relationship may surely be understood in just these terms, as a setting of gift-giving that at least opens a space for mutual obligations attendant upon gift-giving.

….

In the pedagogical setting, the student engages in activities circumscribed by a curriculum. Those activities are implicitly judged to be worthwhile, for the curriculum has characteristically been formally sanctioned (typically through a university’s internal course validation procedures). However, those curricula activities are not just worthwhile in themselves for they are normally intended to lead somewhere. In that leading somewhere, there is something that emerges, whether it be the result of a laboratory experiment, a problem that has been solved, an essay that has been submitted or a design that has been created. These are pedagogical offerings.

….

Both the teacher and the taught put themselves forward, offer themselves, give themselves. They even, to some extent, exchange themselves.

I think that there is something incredibly powerful that happens when we begin to think about the work that the student submits (offers) as a gift. Something that they have given of themselves, a representation of the time, effort and thought they have put into a creative work. If we think about the student’s offering as a gift, surely it must change the way it is treated and the way we respond? How does feedback and assessment change if we think of them as responses to gifts? Or, as gifts themselves? Would our relationships with students change (be enhanced?) if we thought of their submissions and our feedback as mutual gifts, offered to each other as representations of who we are?

“Eleven hundred hours” – Poem by a student

For the past few years I’ve been asking my final year students to develop a learning portfolio as part of the ethics module I teach. Even though I encourage them to use different forms of knowledge representation, few of them take up the offer. However, every now and again someone submits something very different from the usual two-page narrative. The student has given me permission to share her work here.

Its 11. She normally comes at 11.
I hope she forgets today.
She doesn’t care how I feel.
I’m always so tired.
The medication makes me drowsy.
The lines across her face I cannot even discern, my eyesight is failing.
My legs are weak.
I cannot feel my big toe.
She uses a toothpick, I cannot feel it, yet I know it hurts.
I have HIV, I know that.
Some days I cry
She doesn’t know
I’m not sure if I can trust her
I tell her all I want to do is sleep
She talks about exercise
I haven’t exercised a day in my life
My life is about surviving
Surviving the streets of Hanover Park
Protecting my family
Selling myself to support my family
She doesn’t know…
Its 11. She always comes at 11…

Its 11! The hour I despise.
Ms X is next on my patient list.
I wish she would open up.
I talk and talk and nothing gets through to her.
She’s demotivated and I’ve used all my weapons in my arsenal to help her
But its null en void.
I wish I could help her, but she needs to let me in.
Her body language pushes me away,
Never looking directly at me,
But help her I must.
And try and try again I will.
She thinks I don’t understand.
She thinks I cannot see the pain and suffering.
A hard woman is she.
Burdened. Troubled. Scourged.
Her barriers I need to break down, if only she lets her guard down.
I hope in vain that tomorrow will be a better day.
It’s 11! The hour I despise.

Understanding vs knowing

Final exams vs. projects – nope, false dichotomy: a practical start to the blog year (by Grant Wiggins)

Students who know can:

  • Recall facts
  • Repeat what they’ve been told
  • Perform skills as practiced
  • Plug in missing information
  • Recognize or identify something that they’ve been shown before

Whereas students who understand can:

  • Justify a claim
  • Connect discrete facts on their own
  • Apply their learning in new contexts
  • Adapt to new circumstances, purposes or audiences
  • Criticize arguments made by others
  • Explain how and why something is the case

IF understanding is our aim, THEN the majority of the assessments (or the weighting of questions in one big assessment) must reflect one or more of the phrases above.

In the Applied Physiotherapy module that we teach using a case-based learning approach, we’re trying to structure our feedback to students in terms that help them to construct their work in ways that explicitly address the items listed above. We use Google Drive to give feedback to students as they develop their own notes, and try to ensure that the students are expressing their understanding by creating relationships between concepts.

One of the major challenges has been to shift mindsets (both students’ and facilitators’) away from the idea that knowing facts is the same as understanding. As much as we try to emphasise that one can know many facts and still not understand, it’s still clear that this distinction does not come easily to everyone. Both students and some colleagues believe that knowing as many facts as possible is the key to being a strong practitioner, even though the evidence shows that decontextualised knowledge is not helpful in practice situations.

The list above, describing what student understanding “looks like”, is helpful in getting those facilitators and students who struggle with this shift in thinking to better grasp what we’re aiming for.

Assessing Clinical Competence with the Mini-CEX

This is the first draft of an article that I published in The Clinical Teacher mobile app.

Introduction

The assessment of clinical competence is an essential component of clinical education, but it is challenging because of the range of factors that can influence the outcome. Clinical teachers must be able to make valid and reliable judgements of students’ clinical ability, but this is complex: the more valid and reliable a test is, the longer and more complicated it is to administer. The mini Clinical Evaluation Exercise, or mini-CEX, was developed in response to some of the challenges of the traditional clinical evaluation exercise (CEX) and has been found to be a feasible, valid and reliable tool for the assessment of clinical competence.

Assessment of competence

Competence in clinical practice is defined as “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individuals and communities being served” (Epstein & Hundert, 2002). The assessment of competence can take a range of forms in clinical education, but this article will only discuss competence around the physical examination of patients.

Teaching physical examination skills is a unique challenge in clinical education because of the many variables that influence how it is conducted. Consider how each of the following factors plays a role in the quality of teaching and learning that happens: the teacher’s own clinical skills; trainees’ prior knowledge, skills and interest; the availability of patients with the necessary findings; patients’ willingness to be examined by a group of doctors and trainees who may not have any impact on their clinical care; the physical environment, which is usually less than comfortable; and trainees’ level of fatigue. In addition, the session should be relevant to the student and have significant educational value, otherwise there is the risk that it will degenerate into a “show and tell” exercise (Ramani, 2008).

This article will demonstrate how the mini-CEX provides a structured way to achieve the following goals of clinical assessment (Epstein, 2007):

  • Optimise the capabilities of all learners and practitioners by providing motivation and direction for future learning
  • Protect the public by identifying incompetent physicians
  • Provide a basis for choosing applicants for advanced training

The mini-Clinical Evaluation Exercise

The mini-CEX is a method of assessing the clinical competence of students in an authentic clinical setting, while at the same time providing a structured means of giving feedback to improve performance. It involves the direct observation of a focused clinical encounter between a student and patient, followed immediately with structured feedback designed to improve practice. It was developed in response to the shortcomings of both the traditional bedside oral examination and initial clinical evaluation exercise (CEX) (Norcini, 2005).

In the mini-CEX, the student conducts a subjective and objective assessment of a patient, focusing on one aspect of the patient’s presentation, and finishing with a diagnosis and treatment plan. The clinician scores the student’s performance on a range of criteria using the structured form, and provides the student with feedback on their strengths and weaknesses. The clinician highlights an area that the student can improve on, and together they agree on an action the student can take that will help them in their development. This can include a case presentation at a later date, a written exercise that demonstrates clinical reasoning, or a literature search (Epstein, 2007).

The session is relatively short (about 15 minutes) and should be incorporated into the normal routine of training. Ideally, the student should be assessed in multiple clinical contexts by multiple clinicians, although it is up to the student to identify when and with whom they would like to be assessed (Norcini, 2005). Students should be observed at least four times by different assessors to get a reliable assessment of competence (Norcini & Burch, 2007). The mini-CEX is a feasible, valid and reliable assessment tool with high fidelity for the evaluation of clinical competence (Nair, et al., 2008).

The mini-CEX is a good example of a workplace-based assessment method that fulfils three requirements for facilitating learning (Norcini & Burch, 2007):

  1. The course content, expected competencies and assessment practices are aligned
  2. Feedback is provided either during or immediately after the assessment
  3. The assessment is used to direct learning towards desired outcomes

Structure of a mini-CEX form

Each of the competences in Table 1 below is assessed on a 9-point scale where 1-3 is “unsatisfactory”, 4 is “marginal”, 5-6 is “satisfactory”, and 7-9 is “superior” (Norcini, et al., 2005). In addition to the competences documented below, there is also space for both the student and the assessor to record their experience of the assessment, indicating their satisfaction with the process, the time taken for the encounter, and the experience of the assessor.

Table 1: Competencies and descriptors of the mini-CEX form

Each competence is listed with the descriptor of a satisfactory trainee:

  • History taking: Facilitates the patient’s telling of their story; effectively uses appropriate questions to obtain accurate, adequate information; responds appropriately to verbal and non-verbal cues.
  • Physical exam: Follows an efficient, logical sequence; examination appropriate to the clinical problem; explains to the patient; sensitive to the patient’s comfort and modesty.
  • Professionalism: Shows respect, compassion and empathy; establishes trust; attends to the patient’s needs for comfort, respect and confidentiality; behaves in an ethical manner; aware of relevant legal frameworks; aware of limitations.
  • Clinical judgement: Makes an appropriate diagnosis and formulates a suitable management plan; selectively orders/performs appropriate diagnostic studies; considers risks and benefits.
  • Communication skill: Explores the patient’s perspective; jargon free; open and honest; empathetic; agrees the management plan/therapy with the patient.
  • Organisation/efficiency: Prioritises; is timely; succinct; summarises.
  • Overall clinical care: Demonstrates satisfactory clinical judgement, synthesis, caring, effectiveness and efficiency; uses resources appropriately; balances risks and benefits; aware of own limitations.
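
As a small illustration of the 9-point scale described above, the mapping from an item score to its rating band could be expressed as follows. This is a minimal sketch; the function name is my own, and the mini-CEX form itself is simply a paper (or digital) rating instrument.

```python
def mini_cex_band(score: int) -> str:
    """Map a mini-CEX item score (1-9) to its rating band:
    1-3 unsatisfactory, 4 marginal, 5-6 satisfactory, 7-9 superior
    (Norcini et al., 2005)."""
    if not 1 <= score <= 9:
        raise ValueError("mini-CEX item scores run from 1 to 9")
    if score <= 3:
        return "unsatisfactory"
    if score == 4:
        return "marginal"
    if score <= 6:
        return "satisfactory"
    return "superior"

print(mini_cex_band(6))  # satisfactory
```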

Role of the assessor

The assessor does not need to have prior knowledge of the student or experience of assessing them, but should have some experience in the domain of expertise that the assessment is relevant for. The patient must be made aware that the mini-CEX is going to be used to assess a student’s level of competence with them, and they should give consent for this to happen. It is important to note that the session should be led by the trainee, not the assessor (National Health Service, n.d.).

The assessor must also ensure that the patient and assessment task selected are an appropriate example of something that the student would reasonably be expected to be able to do. Remember that the mini-CEX is only an assessment of competence within a narrow scope of practice, and therefore only a focused task will be assessed. The assessor should also record the complexity of the patient’s problem, as there is some evidence that assessors score students higher on cases of increased complexity (Norcini, 2005).

After the session has been completed, the assessor must give feedback to the student immediately, highlighting their strengths as well as areas in which they can improve. Together, clinician and student must agree on an educational action that the student can take in order to improve their practice. It is also recommended that assessors attend at least a basic workshop introducing the mini-CEX. Informal discussion is likely to improve the quality of both the assessment and the feedback to students (Norcini, 2005).

Advantages of the mini-CEX

In addition to being feasible, valid and reliable, the mini-CEX has the following strengths:

  • It is used in the clinical context with real patients and clinician educators, as opposed to the Objective Structured Clinical Exam (OSCE), which uses standardised patients.
  • It can be used in a variety of clinical settings, including the hospital, outpatient clinic and trauma, and while it was designed to be administered in the medical field, it is equally useful for most health professionals. The broader range of clinical challenges improves the quality of the assessment and of the educational feedback that the student receives.
  • The assessment is carried out by a variety of clinicians, which improves the reliability and validity of the tool, but also provides a variety of educational feedback for the student. This is useful because clinicians will often have different ways of managing the same patient, and it helps for students to be aware of the fact that there is often no single “correct” way of managing a patient.
  • The assessment of competence is accompanied with real, practical suggestions for improvement. This improves the validity of the score given and provides constructive feedback that the student can use to improve their practice.
  • The process provides a complete and realistic clinical assessment, in that the student must gather and synthesise relevant information, identify the problem, develop a management plan and communicate the outcome.
  • It can be included in students’ portfolio as part of their collection of evidence of general competence
  • The mini-CEX encourages the student to focus on one aspect of the clinical presentation, allowing them to prioritise the diagnosis and management of the patient.

Challenges when using the mini-CEX

There is some evidence that assessor feedback, in terms of developing a plan of action, is often ignored, which negates the educational component of the tool. In addition, many students fail to reflect on the session or to provide any form of self-evaluation. It is therefore essential that faculty training is considered part of an integrated approach to improving students’ clinical competence, because the quality of the assessment depends on faculty skills in history taking and physical examination, demonstration, observation, assessment and feedback (Holmboe et al., 2004a). Another point to be aware of when considering the mini-CEX is that it does not allow for the comprehensive assessment of a complete patient examination (Norcini et al., 2003).

Practice points

  • The mini-CEX provides a structured format for the assessment of students’ clinical competence within a focused physical examination of a patient
  • It is a feasible, valid and reliable method of assessment when it is used by multiple assessors in multiple clinical contexts over a period of time
  • Completion of the focused physical examination should be followed immediately by the feedback session, which must include an activity that the student can engage in to improve their practice

Conclusion

The mini-CEX has been demonstrated to be a valid and reliable tool for the assessment of clinical competence. It should be administered by multiple assessors in multiple clinical contexts in order to achieve its maximum potential as both an assessment and an educational tool.

 


Clinical reasoning: Identifying errors and correcting

Yesterday I attended a presentation on clinical reasoning by Professors Vanessa Burch (University of Cape Town) and Juanita Bezuidenhout (University of Stellenbosch). Here are the notes I took during the presentation.

  1. How does CR work?
  2. How do errors occur?
  3. Do clinician educators contribute to errors?
  4. Can we identify students with CR difficulties?
  5. Can we improve CR skills?

How does CR work?
[Figure: graphical representation of the clinical reasoning process, from Charlin et al. (2012).]

High-level CR appears to be intuitive, but is really pattern recognition that develops as a result of extensive experience.

Students don’t have the illness scripts (i.e. patterns to recognise clinical presentations / clinical knowledge organised for action) and so they spend more time in System 2 reasoning, rather than System 1 reasoning (see Charlin et al., 2012). Side note: for additional detail on how pattern recognition actually works, see Steven Pinker’s book, “How the Mind Works”.

Are we mindful of the complex thinking processes that make up CR, and do we expect students to be operating at the same level? Do we explicitly tell students about the CR process or expect them to “absorb it”?

We can act on illness scripts without acknowledging that they exist, which is why awareness of our own behaviour (i.e. metacognition, mindfulness or reflection-in-action) is so important. System 2 processes act as a balance that prevents us from acting on patterns that are similar but not the same; a failure of that balance could be the basis for CR errors. See below for the process from Lucchiari & Pravettoni’s cognitive balanced model, which describes a conceptual scheme of diagnostic decision making.

[Figure: the cognitive balanced model of diagnostic decision making (Lucchiari & Pravettoni, 2012).]

It is also important to be aware that belief systems (i.e. cognitive biases and heuristics) exist, and that they can influence behaviour / decision making, which may lead to CR errors (Lucchiari & Pravettoni, 2012). See image below.

[Figure: the influence of belief systems, biases and heuristics on diagnostic decision making (Lucchiari & Pravettoni, 2012).]

Novice practitioners tend to miss subtle differences in clinical presentations. Students must articulate their reasoning processes so that you can help them to link the facts (i.e. the clinical information) to the diagnosis. If the student misses the conceptual relationship between variables, they are prone to making mistakes.

Audétat et al (2012) use Fishbein’s integrative model of behaviour (and associated belief systems) to explain why managing clinical reasoning difficulties is so challenging (see below).

[Figure: Fishbein’s integrative model of behaviour, as applied by Audétat et al. (2012).]

There is a tendency, in the clinical context, to emphasise service delivery above all else, with educational needs taking a distant second place. In other words, students’ case loads are increased with little thought given to how this may impact on their learning (or on the actual management of the patient). The clinical environment is therefore rarely one that is conducive to learning.

Clues to identify students with CR difficulties:

  • We are often not aware that we are reasoning in System 1 while students are still in System 2 → we talk past each other because we are in different spaces.

Clues at the bedside:

  • Limited semantic transformation of patient interview. Student unable to do anything with the information at hand.
  • No logical clustering of complaints. The student can’t categorise like information in a clinically logical way.
  • No order of priority attributed to complaints. Students can’t decide what the most important problem is.
  • Key information not obtained during patient interview. Student doesn’t think to ask important questions → non-existent or faulty illness scripts (non-existent illness scripts are less dangerous than poorly configured ones because they are easier to correct).
  • Physical examination excessively thorough or cursory. Student unable to make reasonable progress through the case.
  • Too many investigations ordered.
  • Inability to interpret results of investigations. Student unable to articulate a reasoning process, or they reason incorrectly, when confronted with a different set of variables e.g. X-ray, rather than a patient.

Strong beliefs in incorrect illness scripts can make novices see things that aren’t there e.g. seeing pneumonia on an X-ray that is clear. Belief systems are powerful drivers for behaviour.

CR errors are often left “unfixed” because trying to correct them in the clinical context is too time-consuming. They should be addressed later.

Other ways to see CR errors:

  • Discharge letters and case notes may be unstructured and lack clarity. Lack of illness scripts (or faulty ones) prevent students from linking concepts, which is evident in how they write narratives.
  • Too much / little time spent with the patient.
  • Emotional reaction to students: negative affect on the part of the patient (ask patients how they experienced the student’s management), or on the part of the clinician (there is something about the student, unrelated to rudeness or other inappropriate behaviour, that you find upsetting).

Can CR be taught?

Every clinician thinks differently.
There is no right or wrong way to think.
Diagnostic competence requires knowledge.

The challenge is to:

  • Organise accurate knowledge in a user-friendly way. This is about developing appropriate semantic networks / conceptual relationships.
  • Create rapid access routes to the knowledge. Create opportunities to access the semantic networks quickly.
  • Provide enough opportunities to use the pathways. Practice, practice, practice.

Avoid students getting stuck at thinking that they don’t know the diagnosis. Help them to move towards thinking, and eventually knowing, the diagnosis.

The key to success is structured reflection. How do we get into their heads, and how do we show them what is in our heads?

Reflection must be structured, because it doesn’t help for the student to keep thinking the wrong thing. It’s no good asking the student to “have another go” when they have just given it their best shot; when the student keeps guessing, it isn’t useful, even if they happen to guess the right answer.

How do we get students to “think again” (i.e. System 1 and 2 thinking) in a structured and explicit way?

  • Prioritise 3 possible diagnoses
  • Column 1: What fits the diagnosis (Yes)? This identifies if they have an illness script. Begin by removing the diagnoses that definitely don’t fit, so that they don’t continue with the faulty illness script.
  • Column 2: What doesn’t fit the diagnosis (No)?
  • Column 3: What do you still need to find out (Data needed)?

This process will help students to articulate an illness script in a structured way. The steps require that you explicitly articulate your own (i.e. the clinician’s) thinking process. Students could also write a narrative explaining their reasoning process for the different columns.
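
As an aside, if this three-column exercise were ever built into a simple departmental teaching tool, it could be represented along the lines of the sketch below. The Yes / No / Data needed structure comes from the steps above; the class name, the example findings and the deliberately crude keep() rule are all hypothetical.

```python
# A hypothetical sketch of the three-column "think again" worksheet described above.
# Nothing here is a standard tool; it is only meant to make the structure explicit.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DiagnosisWorksheet:
    diagnosis: str
    fits: List[str] = field(default_factory=list)          # Column 1: what fits (Yes)
    does_not_fit: List[str] = field(default_factory=list)  # Column 2: what doesn't fit (No)
    data_needed: List[str] = field(default_factory=list)   # Column 3: what still needs to be found out

    def keep(self) -> bool:
        # Deliberately crude rule for illustration: drop any diagnosis with
        # findings that definitely don't fit, so the student doesn't keep
        # reasoning from a faulty illness script.
        return not self.does_not_fit

# Example: a student prioritises possible diagnoses and fills in the columns.
worksheet = [
    DiagnosisWorksheet(
        "Community-acquired pneumonia",
        fits=["productive cough", "fever"],
        data_needed=["chest X-ray", "auscultation findings"],
    ),
    DiagnosisWorksheet(
        "Pulmonary embolism",
        fits=["shortness of breath"],
        does_not_fit=["no pleuritic chest pain", "no obvious risk factors"],
    ),
]

for entry in worksheet:
    print(entry.diagnosis, "-> keep" if entry.keep() else "-> discard")
```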

Anxiety and loss of self-esteem will cause students to crash and be unable to take in anything that you say. You must first create an environment in which they can articulate their thinking process. It’s not about giving them the answers or the facts; it’s about taking them through a reasoning process.

We cannot help students think on a case-by-case basis; there are too many cases. We need to help them to work this out on their own.

References

  • Audétat, M.-C., Dory, V., Nendaz, M., Vanpee, D., Pestiaux, D., Junod Perron, N., & Charlin, B. (2012). What is so difficult about managing clinical reasoning difficulties? Medical Education, 46(2), 216–227.
  • Lucchiari, C., & Pravettoni, G. (2012). Cognitive balanced model: a conceptual scheme of diagnostic decision making. Journal of Evaluation in Clinical Practice, 18(1), 82–88.

PHT402: Final thoughts and moving forward

This is a short review post for the PHT402 Professional Ethics course that was recently completed by physiotherapy students from the University of the Western Cape and qualified physiotherapists who participated through Physiopedia. We believe that this is the first time that a completely open, online course in professional ethics has been run as part of a formal undergraduate health care curriculum.

In total we had 52 UWC students and 36 external participants from around the world, including South Africa, USA, United Kingdom, India, New Zealand, Estonia, Saudi Arabia and Canada. The context of the course, objectives, course activities and participant learning portfolios are available on the project page, so I won’t go over those again other than to say that the course was aimed at developing in students a set of attributes that went beyond simply teaching them about concepts in professional ethics. In other words, it was about trying to change ways of thinking and being, as opposed to teaching content. It’s too early to say whether or not we achieved this but if nothing else, we do seem to have made a significant impact in the personal and professional lives of some of the participants.

One of the most interesting things about this course has been the enormous variety of perspectives that emerged, which on a personal level have driven my thinking and reasoning in different directions than if I had engaged with the topic in isolation. From one of the participants, “…it brings on thoughts that I find unsettling“. This is a good thing. One of the points of the course was to put people into those contested spaces where the “right” and “wrong” answers are ambiguous and context dependent. The more we explore those spaces within ourselves and with others, the better prepared we’ll be to navigate difficult ethical situations in our professional practice.

Running the PHT402 Professional Ethics course in this way has been an enormous learning experience for me and many lessons emerged during the course that were unanticipated. Here are some of the things we did that I’ve never done before and which challenged us to think about different ways of teaching and learning:

  • Participants were mostly unfamiliar with how the internet works and so had no experience with following the work of others. We needed to give very explicit instructions regarding setting up blogs and following other participants. Email support was extensive and many participants were regularly in contact. I learned that email is still an essential aspect of working digitally.
  • Participants were geographically distributed and most had never had any blogging experience. We needed to figure out how to teach them to blog without being able to get them all into a classroom. We wanted to teach them not only how to write blog posts, but also how to embed media, link to other participants, and use tags and categories. We wrote a series of posts that were designed not only to give instructions on how to blog, but also on how to write engaging posts for the web. Every participant was encouraged to follow these posts to ensure that they were exposed to this input.
  • It wasn’t possible for the facilitators to comment on every post by every participant (although I gave it my best shot), but we had to make sure that everyone got feedback of some kind on their posts. We designed a form in Google Forms and asked every participant to review the work of three other participants. We then aggregated that feedback (which was both quantitative and qualitative) and sent it to each participant. In this way we ensured that everyone received feedback in one form or another, even if they weren’t getting comments on their posts. A rough sketch of how that aggregation step could work is included after this list.
  • It’s difficult to give a grade (this was part of a formal curriculum, so grades were unfortunately a necessity) for participants’ perceptions of topics like equality, morality and euthanasia. We decided that students would be graded on the extent to which they could demonstrate evidence of learning in their final posts. We said that this could be in the form of identifying personal conflict and its resolution (one of the aims of the course), linking to the posts of others with analysis and integration of those alternative ideas (learning collaboratively), or using platform features, e.g. tags, categories, Likes and Comments (using technology to enable richer forms of communication). I created a rubric that is more extensive than this list, but it goes to show that the assessment of a course like this needs to be about more than simply asking whether the student covered the relevant content.
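
For anyone curious about the aggregation step mentioned above, here is a minimal sketch of how peer-review responses exported from Google Forms as a CSV could be grouped per participant. This is not the form or script we actually used; the column names (“reviewee”, “rating”, “comment”) and the file name are hypothetical.

```python
# A hypothetical sketch of aggregating peer feedback per participant.
# Assumes a CSV export with "reviewee", "rating" and "comment" columns.

import csv
from collections import defaultdict
from statistics import mean

def aggregate_feedback(csv_path: str) -> dict:
    """Group quantitative and qualitative feedback by the participant being reviewed."""
    ratings = defaultdict(list)
    comments = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            ratings[row["reviewee"]].append(int(row["rating"]))
            comments[row["reviewee"]].append(row["comment"])
    return {
        reviewee: {
            "average_rating": round(mean(scores), 1),
            "comments": comments[reviewee],
        }
        for reviewee, scores in ratings.items()
    }

# Each participant can then be sent their own summary, e.g.:
# summaries = aggregate_feedback("peer_review_responses.csv")
# print(summaries["participant_01"])
```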

Now that this course has been completed, I plan to do research on the data that was generated. This was always part of the project and as such it had ethical clearance from my institutional review board from the outset:

  • I designed the learning environment using principles that I had developed as part of my PhD project. This course could be seen as a pilot study aimed at further testing those design principles as a way of developing a set of Graduate Attributes in an online learning space. To this end I’ll be doing a series of focus groups to find out from students whether or not the course objectives were achieved.
  • In addition to the focus groups I’d like to try and triangulate that data with a content analysis of the blog posts and comments that were generated during the course. I’ll qualitatively analyse the course outputs that were created by participants.
  • I’d like to survey all of the participants to get a general sense of their experiences and perceptions of having completed a course that was very different to what they were used to from a traditional curriculum. I’d like to find out if offering a course in this way is something that we should be looking at in more depth in our department.
  • During the course, a significant number of connections were made between people on the open web. I’d like to use social network analysis to see if there’s anything interesting that emerged as a result of how people connected with each other. If you have any suggestions for methods to analyse a set of blog posts on WordPress, please let me know; a rough sketch of one possible starting point follows this list.
  • Finally, I want to interview the other facilitators who helped me to develop the course and who were based in different countries at different times in the project. I want to see if there are any lessons that could be developed for other, geographically dispersed teachers who would like to run collaborative online courses.
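
On the social network analysis point above, a rough sketch of one possible starting point is shown below. It assumes the links and comments between participants’ blogs have already been extracted into a simple edge list (the pairs here are invented), and uses networkx only as one example of a suitable library.

```python
# A hypothetical sketch of analysing connections between course participants.
# An edge A -> B means participant A linked to or commented on participant B's blog;
# the pairs below are invented for illustration.

import networkx as nx

interactions = [
    ("participant_01", "participant_07"),
    ("participant_07", "participant_01"),
    ("participant_03", "participant_07"),
    ("participant_12", "participant_03"),
]

graph = nx.DiGraph()
graph.add_edges_from(interactions)

# Two simple questions the graph can answer: who attracted the most links or
# comments, and are there clusters of participants who mostly interacted with
# each other?
most_linked = sorted(graph.in_degree(), key=lambda pair: pair[1], reverse=True)
print("Most linked-to participants:", most_linked[:3])
print("Clusters of connected participants:", list(nx.weakly_connected_components(graph)))
```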