reading research

#APaperADay – Conceptual frameworks to illuminate and magnify

Bordage, G. (2009). Conceptual frameworks to illuminate and magnify. Medical Education, 43(4), 312–319.

Conceptual frameworks represent ways of thinking about a problem or a study, or ways of representing how complex things work the way they do.

A nice position paper that emphasises the value of conceptual frameworks as a tool for thinking, not only more deeply about problems, but more broadly, through the use of multiple frameworks applied to different aspects of the problem. The author uses three examples to develop a set of 13 key points related to the use of conceptual frameworks in education and research. The article is useful for anyone interested in developing a deeper approach to project design and educational research.

Frameworks inform the way we think and the decisions we make. The same task – viewed through different frameworks – will likely have different ways of thinking associated with it.

Frameworks come from:

  • Theories that have been confirmed experimentally;
  • Models derived from theories or observations;
  • Evidence-based practices.

We can combine frameworks in order for our activities to be more holistic. Educational problems can be framed with multiple frameworks, each providing different points of view and leading to different conclusions/solutions.

Like a lighthouse that illuminates only certain sections of the complete field of view, conceptual frameworks also provide only partial views of reality. In other words, there is no “correct” or all-encompassing framework for any given problem. Using a framework only enables us to illuminate and magnify one aspect of a problem, necessarily leaving others in the dark. When we start working on a problem without identifying our frameworks and assumptions (can also be thought of as identifying our biases) we limit the range of possible solutions.

Authors of medical education studies tend not to explicitly identify their biases and frameworks.

The author goes on to provide three examples of how conceptual frameworks can be used to frame various educational problems (2 in medical education projects, 1 in research). Each example is followed by key points (13 in total). In each of the examples, the author describes possible pathways through the problem in order to develop different solutions, each informed by different frameworks.

Key points (these points make more sense after working through the examples):

  1. Frameworks can help us to differentiate problems from symptoms by looking at the problem from broader, more comprehensive perspectives. They help us to understand the problem more deeply.
  2. Having an awareness of a variety of conceptual frameworks makes it more likely that our possible solutions will be wide-ranging, because the frameworks emphasise different aspects of the problem and potential solution.
  3. Because each framework is inherently limited, a variety of frameworks can provide more ways to identify the important variables and their interactions/relationships. It is likely that more than one framework is relevant to the situation.
  4. We can use different frameworks within the same problem to analyse different aspects of the problem e.g. one for the problem and one for the solution.
  5. Conceptual frameworks can come from theories, models or evidence-based practices.
  6. Scholars need to apply the principles outlined in the conceptual framework(s) selected.
  7. Conceptual frameworks help identify important variables and their potential relationships; this also means that some variables are disregarded.
  8. Conceptual frameworks are dynamic entities and benefit from being challenged and altered as needed.
  9. Conceptual frameworks allow scholars to build upon one another’s work and allow individuals to develop programmes of research. When researchers don’t use frameworks, there’s an increased chance that the “findings may be superficial and non-cumulative.”
  10. Programmatic, conceptually-based research helps accumulate deeper understanding over time and thus moves the field forward.
  11. Relevant conceptual frameworks can be found outside one’s specialty or field. Medical education scholars shouldn’t expect that all relevant frameworks can be found in the medical education literature.
  12. Considering competing conceptual frameworks can maximise your chances of selecting the most appropriate framework for your problem or situation while guarding against premature, inappropriate or sub-optimal choices.
  13. Scholars are responsible for making explicit in their publications the assumptions and principles contained in the conceptual framework(s) they use.

The third example seems (to me) to be an unnecessarily long diversion into the author’s own research. And while the first two examples are quite practical and relevant, the third is quite abstract, possibly because of the focus on educational research and study design. I wonder how many readers will find relevance in it.

In a research context, conceptual frameworks can help to frame the initial questions, identify variables for analysis, and interpret results.

The conclusion of the paper is a very nice summary of the main ideas. However, it also introduces some new ideas, which probably should have been included in the main text.

Conceptual frameworks provide different lenses for looking at, and thinking about, problems and conceptualising solutions. Using a variety of frameworks, we open ourselves up to different solutions and potentially avoid falling victim to our own assumptions and biases.

It’s important to remember that frameworks magnify and illuminate only certain aspects of each problem, leaving other aspects in the dark i.e. there is no single framework that does everything.

Novice educators and researchers may find it daunting to work with frameworks, especially when you consider that they may not be aware of the range of possible frameworks.

How do you choose one framework over another? It’s important to discuss your problem and potential solutions with more experienced colleagues and experts in the field. Remember however, that some experts may be experts partly because they’ve spent a long time committed to a framework/way of seeing the world, which may make it difficult for them to give you an unbiased perspective.

Reviewing the relevant literature also helps to identify what frameworks other educators have used in addressing similar problems. The specific question you’re asking is also an important means of identifying a relevant framework.

Note: I’m the Editor at OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.

AI clinical

Translating AI into the clinical setting at UC Irvine – AI Med

Ultimately, many of these shortcomings exist because few if any physicians are actively engaged in developing the next generation of technology, AI or otherwise. It is interesting to note the vast majority of medical startup companies are founded with limited if any physician involvement or oversight. Without experts that deeply understand both the medical and technical aspects of the problem, there is currently a significant gap in translating cutting-edge AI technology to healthcare.

Source: Translating AI into the clinical setting at UC Irvine – AI Med

I’m preparing an article on machine learning for clinicians and one of the recommendations I make is that we must ensure that the 21st century healthcare agenda is not driven by venture capital and software engineers. Even though private corporations and government are probably not malevolent, when surveillance and profit are your core concerns it’s unlikely that you’re going to develop something that truly works in the patients’ best interest. We really do need clinicians to be more involved in guiding the progression of AI integration in the clinical context.

See also: AMA passes first policy guidelines on augmented intelligence.

Note: If you’re interested in this topic, I’ve shared the first draft of my introduction to machine learning for clinicians on ResearchGate and would appreciate any feedback you may have.

assessment clinical education research

Emotions and assessment: considerations for rater‐based judgements of entrustment

We identify and discuss three different interpretations of the influence of raters’ emotions during assessments: (i) emotions lead to biased decision making; (ii) emotions contribute random noise to assessment, and (iii) emotions constitute legitimate sources of information that contribute to assessment decisions. We discuss these three interpretations in terms of areas for future research and implications for assessment.

Source: Gomez‐Garibello, C. and Young, M. (2018), Emotions and assessment: considerations for rater‐based judgements of entrustment. Med Educ, 52: 254-262. doi:10.1111/medu.13476

When are we going to stop thinking that assessment – of any kind – is objective? As soon as you’re making a decision (about what question to ask, the mode of response, the weighting of the item, etc.) you’re making a subjective choice about the signal you’re sending to students about what you value. If the student considers you to be a proxy of the profession/institution, then you’re subconsciously signalling the values of the profession/institution.

If you’re interested in the topic of subjectivity in assessment, you may be interested in two of our In Beta episodes:


Objective Structured Clinical Exams

This is the first draft of the next piece of content that I’ll be publishing in my Clinical Teacher app.


The Objective Structured Clinical Examination was introduced as an assessment method that aimed to address some of the challenges that arose with the assessment of students’ competence in clinical skills. In a traditional clinical examination there are several interacting variables that can influence the outcome, including the student, the patient, and the examiner. In the structured clinical examination, two of the variables – the patient and the examiner – are more controlled, allowing for a more objective assessment of the student’s performance.

The OSCE is a performance-based assessment that can be used in both formative and summative situations. It is a versatile, multipurpose tool for evaluating healthcare students in the clinical context, assessing competency through direct observation against objective criteria. As an assessment method it is precise, objective and reproducible, which means that it allows consistent testing of students across a wide range of clinical skills. Unlike the traditional clinical exam, the OSCE can evaluate areas critical to the performance of healthcare professionals, such as communication skills and the ability to handle unpredictable patient behaviour. However, the OSCE is not inherently without fault and is only as good as the team implementing it. Care should be taken not to assume that the method is in itself valid, reliable or objective. In addition, the OSCE cannot be used as a measure of all things important in medical education and should be used in conjunction with other assessment tasks.


Introduction and background

The OSCE was developed in an attempt to address some of the challenges with the assessment of clinical competence that were prevalent at the time (Harden, Stevenson, Wilson-Downie & Wilson, 1975). These included problems with validity, reliability, objectivity and feasibility. In the standard clinical assessment at the time, the student’s performance was assessed by two examiners who observed them with several patients. However, the patient and examiner selection meant that chance played too dominant a role in the examination, leading to variations in the outcome (ibid.). Thus there was a need for a more objective and structured approach to clinical examination. The OSCE assesses competencies that are based on objective testing through direct observation. It consists of several stations in which candidates must perform a variety of clinical tasks within a specified time period against predetermined criteria (Zayyan, 2011).

The OSCE is a method of assessment that is well-suited to formative assessment. It is a form of performance-based assessment, which means that a student must demonstrate the ability to perform a task under the direct observation of an examiner. Candidates are examined against predetermined criteria on the same or similar clinical scenarios or tasks, with marks recorded against those criteria, thus enabling recall, teaching audit and the determination of standards.


Rationale for the OSCE

While the OSCE attempts to address issues of validity, reliability, objectivity and feasibility, it should be noted that it cannot be all things to all people. It is practically impossible to have an assessment method that satisfies all the criteria of a good test in terms of validity and reliability. For example, the OSCE cannot be used to measure students’ competence in characteristics like empathy, commitment to lifelong learning, and care over time. These aspects of students’ competence should be assessed with other methods. Having said that, we should discuss the four important aspects of accurate assessment that inform the implementation of the OSCE (Barman, 2005).


Validity is a measure of how well an assessment task measures what it is supposed to measure, and may be regarded as the most important factor to be considered in an assessment. For a test to have a high level of validity, it must contain a representative sample of what students are expected to have achieved. For example, if the outcome of the assessment task is to say that the student is competent in performing a procedure, then the test must actually measure the student’s ability to perform the procedure. In addition, the OSCE tests a range of skills in isolation, which does not necessarily indicate students’ ability to perform the separate tasks as an integrated whole.


Reliability is a measure of the stability of test results over time and across sample items. In the OSCE, reliability may be low if there are few stations and short timeframes. Other factors that influence reliability include unreliable “standardised” patients, idiosyncratic scoring systems, fatigued patients, examiners and students, and noisy or disruptive assessment environments. The best way to improve the reliability of an OSCE is to have a high number of stations and to combine the outcomes with other methods of assessment.
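The paper itself doesn’t give a formula, but a standard result from classical test theory, the Spearman-Brown prophecy formula, illustrates why adding stations is the usual first recommendation for improving reliability. A minimal sketch in Python (the station counts and baseline reliability below are invented for illustration):

```python
def spearman_brown(r, n):
    """Predicted reliability of a test lengthened by a factor of n,
    given the reliability r of the original test (Spearman-Brown prophecy)."""
    return (n * r) / (1 + (n - 1) * r)

# Invented illustrative values: a 10-station OSCE with reliability 0.55,
# doubled to 20 stations (lengthening factor n = 2).
print(f"{spearman_brown(0.55, 2):.2f}")  # ~0.71
```

Doubling the number of stations lifts a modest baseline reliability of 0.55 to roughly 0.71, with diminishing returns for each further increase.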


The objectivity of the OSCE relies on the standardisation of the stations and the checklist method of scoring student performance, which theoretically means that every student will be assessed on the same task in the same way. However, there is evidence that inter-rater reliability can be low on the OSCE as well, meaning that there is still a bias present in the method. In order to reduce the effect of this bias, the OSCE should include more stations.


In deciding whether or not it is feasible to use the OSCE as an assessment method, there are a number of factors to be considered. These include the number of students to be assessed, the number of examiners available, the physical space available for running the exam, and the associated costs. It is important to note that the OSCE is more time-consuming and more expensive in terms of human and material cost than other assessment methods, for example the structured oral examination. In addition, the time required for setting up the examination is greater than that needed for traditional assessment methods, which must be taken into account when deciding whether or not to use the OSCE.
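To make the feasibility question concrete, a back-of-envelope calculation can weigh cohort size against examiner time. All of the numbers below are invented for illustration; substitute your own:

```python
import math

# All figures are invented for illustration.
students = 120
stations = 12            # each station needs one examiner
station_minutes = 5
change_minutes = 1       # time to move between stations
tracks = 2               # parallel circuits running simultaneously

candidates_per_round = stations * tracks        # one candidate per station
rounds = math.ceil(students / candidates_per_round)
round_minutes = stations * (station_minutes + change_minutes)
exam_minutes = rounds * round_minutes
examiner_hours = stations * tracks * exam_minutes / 60

print(f"{rounds} rounds, {exam_minutes / 60:.1f} h of examination time, "
      f"{examiner_hours:.0f} examiner-hours in total")
```

Doubling the tracks halves the number of rounds but also doubles the number of examiners needed at any one time, which is exactly the human-cost trade-off described above.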


Advantages of the OSCE format

The OSCE format allows for the direct observation of a student’s ability to engage with clinical ethics skills during a patient interaction. In addition, the OSCE can be used effectively to evaluate students’ communication skills, especially if standardised instruments for assessing these skills are used. Furthermore, it (Shumway & Harden, 2003; Chan, 2009):

  • Provides a uniform marking scheme for examiners and consistent examination scenarios for students, including the pressure exerted by patients.
  • Generates formative feedback for both the learners and the curriculum, whereby feedback that is gathered can improve students’ competency and enhance the quality of the learning experience.
  • Allows for more students to be examined at any one time. For example, when a student is carrying out a procedure, another student who has already completed that stage may be answering questions at another station.
  • Provides for a more controlled setting because only two variables exist: the patient and the examiner.
  • Provides more insights about students’ clinical and interactive competencies.
  • Can be used to objectively assess other aspects of clinical expertise, such as physical examination skills, interpersonal skills, technical skills, problem-solving abilities, decision-making abilities, and patient treatment skills.
  • Student participation in an OSCE has a positive impact on learning because the students’ attention is focused on the acquisition of clinical skills that are directly relevant to clinical performance.


Preparation for an OSCE

The first thing to do when considering developing an OSCE is to ask what is to be assessed. It is important to realise that OSCEs are not appropriate for assessing all aspects of competence. For example, knowledge is best assessed with a written exam.

The venue where the OSCE is going to take place must be carefully considered, especially if it needs to be booked in advance. If there are large numbers of students, it may be worthwhile to have multiple tracks running in different venues. The advantages are that there will be less noise and fewer distractions. If space is not an issue, having separate rooms for each station is preferable, although multiple stations in a single room with partitions is also reasonable. If real patients will be assisting, note that you will need rooms for them to rest in (Bouriscot, 2005).

Be aware that you will need to contact and confirm external examiners well in advance of running the OSCE. Clinicians are busy and will need lots of advance warning. It may be useful to provide a grid of dates and times that are available to give examiners the option of choosing sessions that are most suitable for them (ibid.).

One of the key factors in the success of using the OSCE for assessment is the use of either real or standardised patients. This is a component that adds confidence to the reliability of the outcomes. Standardised patients are the next best thing to working with real patients. They are usually volunteers or actors who are trained to role-play different psychological and physiological aspects of patients. Finding and training standardised patients is a significant part of preparing for an OSCE (Dent & Harden, 2005).

If equipment is required, ensure that there are lists available at every station, highlighting what equipment should be present in order for the student to successfully complete the station. You should go through each station with the list the day before the OSCE to ensure that all equipment is present (Bouriscot, 2005).

Mark sheets to be used for the OSCE must be developed in advance. Each examiner at each station must be provided with an appropriate number of mark sheets for the students, including an estimation of spoilage. If there are going to be large numbers of students, it may be worthwhile developing mark sheets that can be electronically scanned. If results are to be manually entered, someone will need to ensure that they have been captured correctly (Bouriscot, 2005).


Developing scenarios for each station

The number of stations in an examination is dependent on a number of factors, including the number of students to be assessed, the range of skills and content areas to be covered, the time allocated to each station, the total time available for the examination and the facilities available to conduct the examination (Harden & Cairncross, 1980). Preparing the content for each station should begin well in advance so that others can review the stations and perhaps even complete a practice run before the event. It may happen that a scenario is good in theory but that logistical complications make it unrealistic to run in practice.

The following points are important to note when developing stations (Bouriscot, 2005):

  • Instructions to students must be clear so that they know exactly what is expected of them at each station
  • Similarly, instructions to examiners must also make it clear what is expected of them
  • The equipment required at each station should be identified
  • The marking schedule should identify the important aspects of the skill being assessed
  • The duration of each station should be specified

Stations should be numbered so that there is less confusion for students who are moving between them, and also for examiners who will be marking at particular stations. Note that it is recommended to have one rest station for every 40 minutes of assessment (Bouriscot, 2005). Arrows, either on the floor or wall will help candidates move between stations and avoid any confusion about rotation.

While stations may be set up in any number of ways, one suggested format is for the student to rotate through two “types” of station: a procedure station and a question station (Harden, Stevenson, Wilson-Downie & Wilson, 1975). There are two advantages to this approach. First, it reduces the effect of cueing, whereby the question that the student must answer is presented at the same time as the instruction for performing the procedure. The nature of the question may prompt the student towards the correct procedure. By using two stations, the candidate is presented with a problem to solve or an examination to be carried out without the questions that come later. When the student gets to the “question” station, they are unable to go back to the previous station to change their response. Thus the questions do not provide a prompt for the examination. The second advantage of the station approach is that more students can be examined at any one time. While one student is performing, another student who has already completed that stage is answering the questions (ibid.).


Running an OSCE

It may be useful, if the venue is large, to have a map of the facility set up, including the location of specific stations. This can help determine early on which stations will be set up in which rooms, as well as determining the order of the exam. The number of available rooms will determine how many stations are possible, as well as how many tracks can be run simultaneously (and therefore how many times each track will need to be run). You will also need a space for subsequent groups of students to be sequestered while the previous round of students is finishing. If the exam is going to continue for a long time, you may need an additional room for examiners and patients to rest and eat.

Students should be informed in advance how they will proceed from one station to another. For example, will one bell be used to signal the end of one station and the beginning of another? If the OSCE is formative in nature, or a practice round, will different buzzers be used to signal a period of feedback from the examiner? When the bell signalling the end of the station sounds, candidates usually have one minute to move to the next station and read the instructions before entering.

On the day of the exam, time should be allocated for registering students, directing them to stations, setting the time, indicating station changes (buzzers, bells, etc.), and assisting with last-minute setup changes and the dismantling of stations. Each station must have the station number and instructions posted at the entrance, and standardised patients, examiners and candidates must be matched to the appropriate stations. Examiners and patients should be set up at their stations sufficiently in advance of the starting time in order to review the checklists and prepare themselves adequately. It may be possible to have a dry run of the station in order to help the patient get into the role.

It is possible to use paper checklists or to capture the marks with handheld devices like iPads or smartphones (see Software later). The benefit of using digital capturing methods as opposed to paper checklists is that the data has already been captured by the end of the examination, and feedback to students and the organisers can be provided more efficiently. If paper checklists are used, they must be collected at the end of the day and the data captured manually.
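As a sketch of what digital mark capture might look like behind the scenes (the station criteria and mark weights here are hypothetical, not taken from any named OSCE software):

```python
# Hypothetical checklist for one station; criterion -> (achieved, marks).
checklist = {
    "Introduces self and confirms patient identity": (True, 1),
    "Explains procedure and obtains consent":        (True, 2),
    "Performs technique in correct sequence":        (False, 4),
    "Communicates findings to patient":              (True, 2),
}

def station_score(checklist):
    """Sum the marks for criteria the examiner ticked as achieved."""
    achieved = sum(marks for done, marks in checklist.values() if done)
    total = sum(marks for _, marks in checklist.values())
    return achieved, total

achieved, total = station_score(checklist)
print(f"Station score: {achieved}/{total}")
```

Because each tick is stored as structured data rather than ink on paper, station totals can be aggregated and fed back to students as soon as the examination ends.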

Some of the common challenges that are experienced during the running of the OSCE include (Bouriscot, 2005):

  • Examiners not turning up – send reminders the week before and have reserves on standby
  • Standardised patients not turning up – have reserves on standby
  • Patients not turning up – remind them the day before, provide transport, plan for more patients than are needed
  • Patient discomfort with the temperature – ensure that the venue is warmed up or cooled down before the OSCE begins
  • Incorrect / missing equipment – check the equipment the day before, have spares available in case of equipment malfunction, batteries dying, etc.
  • Patients getting ill – have medical staff on hand
  • Students getting ill – take them somewhere nearby to lie down and recover

The above list demonstrates the range of complications that can arise during an OSCE. You should expect that things will go wrong and try and anticipate them. However, you should also be aware that there will always be room for improvement, which is why attention must be paid to evaluating the process. It is essential that the process be continually refined and improved based on student and staff feedback (Frantz, et al., 2013).


Marking of the OSCE

The marking scheme for the OSCE is intentionally and objectively designed. It must be concise, well-focused and unambiguous, with the aim of discriminating between good and poor student performance. The marking scheme must therefore be cognisant of the many possible responses and provide scores that are appropriate to each student’s performance (Zayyan, 2011).

The allocation of marks between the different parts of the examination should be determined in advance and will vary with, among other things, the seniority of the students. Thus, with junior students there will be more emphasis on their technique, and fewer marks will be awarded for the interpretation of their findings (Harden, Stevenson, Wilson-Downie & Wilson, 1975).

The following example marking rubric for OSCE stations is adapted from Chan (2009). Each criterion is judged on four performance levels, listed here from highest to lowest:

Diagnosis

  • Able to give an excellent analysis and understanding of the patient’s problems and situation, applied medical knowledge to clinical practice, and determined the appropriate treatment.
  • Able to demonstrate medical knowledge with a satisfactory analysis of the patient’s problems, and determined the appropriate treatment.
  • Showed a basic analysis of and knowledge about the patient’s problems, but still provided the appropriate treatment.
  • Only able to show a minimal level of analysis and knowledge of the patient’s problems; unable to provide the appropriate treatment.

Problem-solving skills

  • Able to manage the time to suggest and bring out appropriate solutions to problems; more than one solution was provided; a logical approach to seeking solutions was observed.
  • Able to manage the time to bring out only one solution; a logical flow was still observed but lacked relevance.
  • Still able to bring out one solution on time; a logical flow was hardly observed.
  • Failed to bring out any solution in the specified time; no logical flow was observed.

Communication and interaction

  • Able to get the detailed information needed for diagnosis; gave very clear and detailed explanations and answers to patients; paid attention to patients’ responses and words.
  • Able to get the detailed information needed for diagnosis; gave clear explanations and answers to patients; attempted but only paid some attention to patients’ responses and words.
  • Only able to get the basic information needed for diagnosis; attempted to give a clear explanation to patients but omitted some points; did not pay attention to patients’ responses and words.
  • Failed to get information for diagnosis; gave ambiguous explanations to patients.

Clinical skills

  • Performed the appropriate clinical procedures for every clinical task with no omissions; no unnecessary procedures were done.
  • Performed the required clinical procedures satisfactorily; committed a few minor mistakes or unnecessary procedures which did not affect the overall completion of the procedure.
  • Performed the clinical procedures at an acceptable standard; committed some mistakes and some unnecessary procedures.
  • Failed to carry out the necessary clinical procedures; committed many mistakes and showed misconceptions about operating clinical apparatus.


Common mistakes made by students during the OSCE

It may be helpful to guide students before the examination by helping them to understand what the OSCE is not (Medical Council of Canada, n.d.).

  • Not reading the instructions carefully – The student must elicit from the “patient” only the precise information that the question requires. Any additional or irrelevant information provided will not receive a mark.
  • Asking too many questions – Avoid asking too many questions, especially if the questions are disorganised and erratic, and seem aimed at stumbling across the few questions that are relevant to the task. The short period of time is designed to test candidates’ ability to elicit the most appropriate information from the patient.
  • Misinterpreting the instructions – This happens when candidates try to determine what the station is trying to test, rather than working through a clinically appropriate approach to the patient’s presenting complaint.
  • Using too many directed questions – Open-ended questions are helpful in this regard as they give the patient the opportunity to share more detailed information, while still leaving space for you to follow up with more directed questions.
  • Not listening to patients – Patients often report that candidates did not listen appropriately and therefore missed important information that was provided during the interview. In the case of using standardised patients, they may be trained to respond to an apparently indifferent candidate by withdrawing and providing less information.
  • Not explaining what you are doing in physical examination stations – The candidates may not explain what they are doing during the examination, leaving the examiner guessing as to what was intended, or whether the candidate observed a particular finding. By explaining what you see, hear and intend doing, you provide the examiner with context that helps them in scoring you appropriately.
  • Not providing enough direction in management stations – At stations that aim to assess management skills, candidates should provide the patient with clear, specific instructions rather than vague guidance.
  • Missing the urgency of a patient problem – When the station is designed to assess clinical priorities, work through the priorities first and then come back later for additional information if this was not elicited earlier.
  • Talking too much – The time that the candidate spends with their patient should be used effectively in order to obtain the most relevant information. Candidates should avoid showing off with their vast knowledge base. Speak to the patient with courtesy and respect, eliciting relevant information.
  • Giving generic information – The candidate should avoid giving generic information that is of little value to the patient when it comes to making an informed decision.


Challenges with the OSCE

While the OSCE has many positive aspects, it should be noted that there are also many challenges in setting up and running one. The main criticism of the OSCE is that it is very resource intensive, but there are other disadvantages, including (Barman, 2005; Chan, 2009):

  • Requiring a lot of organisation. However, an argument can also be made that the increased preparation occurs before the exam and allows examiners’ time to be used more efficiently.
  • Being expensive in terms of manpower, resources and time.
  • Discouraging students from looking at the patient as a whole.
  • Examining a narrow range of knowledge and skills, and failing to test history-taking competency properly. Students examine a number of different patients in isolation at each station instead of comprehensively examining a single patient.
  • Manual scoring of OSCE stations is time-consuming and increases the probability of mistakes.
  • It is nearly impossible to use children as standardised patients, or to find multiple patients with similar physical findings.

In addition, while being able to take a comprehensive history is an essential clinical skill, the time constraints necessary in an OSCE preclude this from being assessed. Similarly, because students’ skills are assessed in sections, it is difficult to make decisions regarding students’ ability to assess and manage patients holistically (Barman, 2005). Even if one were able to construct stations that assessed all aspects of clinical skills, it would only test those aspects in isolation rather than comprehensively integrating them all into a single demonstration. Linked to that, the OSCE also has a potentially negative impact on students’ learning because it contains multiple stations that sample isolated aspects of clinical medicine. The student may therefore prepare for the examination by compartmentalising the skills and not completely understanding the connection between them (Shumway & Harden, 2003). There also seems to be some evidence that while the OSCE is an appropriate method of assessment in undergraduate medical education, it is less well-suited for assessing the in-depth knowledge and skills of postgraduate students (Patil, 1993).

Challenges with reliability in the clinical examination may arise from the fact that different students are assessed on different patients, and one may come across a temperamental patient who helps some students while obstructing others. In addition, test scores may not reflect students’ actual ability, as repetitive demands may fatigue the student, patient or examiner. Students’ fatigue due to lengthy OSCEs may affect their performance. Moreover, some students experience greater tension before and during examinations, as compared to other assessment methods. In spite of efforts to control patient and examiner variability, inaccuracies in judgment due to these effects remain (Barman, 2005).


Software for managing an OSCE

There is an increasing range of software that assists with setting up and running an OSCE. These services often run on a variety of mobile devices, offering portability and ease of use for examiners. One of the primary benefits of using digital, instead of paper, scoring sheets is that the results are instantly available for analysis and for reporting to students. Examples of some of the available software include OSCE Online, OSCE Manager and eOSCE.
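The instant-results benefit is easy to picture. The sketch below is purely illustrative (it is not based on OSCE Online, OSCE Manager or eOSCE, and all names in it are hypothetical): each examiner submits a per-station checklist, and totals are available the moment the last entry arrives.

```python
# Illustrative sketch of digital OSCE scoring (hypothetical names;
# not based on any of the products mentioned above).
from dataclasses import dataclass

@dataclass
class StationScore:
    student: str
    station: str
    checklist: list[bool]  # one entry per checklist item achieved
    global_rating: int     # examiner's overall impression, e.g. 1-5

def station_percentage(score: StationScore) -> float:
    """Percentage of checklist items the candidate achieved at one station."""
    return 100 * sum(score.checklist) / len(score.checklist)

def overall_result(scores: list[StationScore]) -> float:
    """Mean station percentage across all of a candidate's stations."""
    return sum(station_percentage(s) for s in scores) / len(scores)

scores = [
    StationScore("A123", "history taking", [True, True, False, True], 4),
    StationScore("A123", "chest examination", [True, False, True, True], 3),
]
print(overall_result(scores))  # 75.0
```

Because each checklist is captured as structured data rather than on a paper form, the same records can feed item-level analysis (e.g. which checklist items most candidates miss) without a separate transcription step, which is where the time saving and reduction in scoring mistakes come from.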


Ten OSCE pearls

The following list is taken from Dent & Harden (2005), and includes lessons learned from practical experiences of running OSCEs.

  1. Make all stations the same length, since rotating students through the stations means that you can’t have some students finishing before others.
  2. Linked stations require preparation. For example, if station 2 requires the student to follow up on what was done at station 1, then no student can begin at station 2. This means that a staggered start is required. In this case, one student would begin the exam before everyone else. Then, when the main exam begins, the student at station 1 will move to station 2. This student will finish one station before everyone else.
  3. Prepare additional standardised patients and examiners, in case unpredictable events detain any of them.
  4. Have backup equipment in case any of the exam equipment fails.
  5. Have staff available during the examination to maintain security and help students move between stations, especially those who are nervous at the beginning.
  6. If there is a missing student, move a sign labelled “missing student” to each station as the exam progresses. This will help avoid confusion when other students move into the unoccupied station by mistake.
  7. Remind students to remain in the exam room until the buzzer signals the end of the station, even if they have completed their task. This avoids having students standing around in the areas between rooms.
  8. Maintain exam security, especially when running the exam multiple times in series. Ensure that the first group of students are kept away from the second group.
  9. Make sure that the person keeping time and sounding the buzzer is well-prepared, as they have the potential to cause serious confusion among examiners and students. In addition, ensure that the buzzer can be heard throughout the exam venue.
  10. If the rotation has been compromised and people are confused, stop the exam before trying to sort out the problem. If a student has somehow missed a station, rather allow them the opportunity to return at the end and complete it then.
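Pearl 2’s staggered start follows directly from how the standard rotation works. As a hypothetical sketch (not from Dent & Harden): if every student starts at a different station and everyone moves one station forward each round, the student who starts at the second station of a linked pair reaches it before ever visiting the first, so the pair breaks unless one candidate starts early.

```python
# Illustrative sketch of the standard OSCE rotation (stations 0-indexed here,
# so "station 1 then station 2" in the text is stations 0 and 1 below).
def schedule(n_stations: int) -> list[list[int]]:
    """Grid of rounds x students: student s starts at station s and
    moves one station forward each round."""
    return [[(start + r) % n_stations for start in range(n_stations)]
            for r in range(n_stations)]

def visits_in_order(grid: list[list[int]], student: int,
                    first: int, second: int) -> bool:
    """True if the student visits station `first` before station `second`."""
    order = [round_stations[student] for round_stations in grid]
    return order.index(first) < order.index(second)

grid = schedule(5)
# The student starting at station 0 does the linked pair in order...
print(visits_in_order(grid, 0, 0, 1))  # True
# ...but the student starting at station 1 reaches it before station 0,
# which is exactly the problem the staggered start solves.
print(visits_in_order(grid, 1, 0, 1))  # False
```

Starting one candidate a round early means the second station of the pair is never occupied by someone who lacks its prerequisite, at the cost of that candidate finishing one station ahead of everyone else.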


Take home points

  • The OSCE aims to improve the validity, reliability, objectivity and feasibility of assessing clinical competence in undergraduate medical students
  • The method is not without its challenges, which include the fact that it is resource intensive and therefore expensive
  • Factors which can play a role in reducing confidence in the test results include student, examiner and patient fatigue.
  • The best way to limit the influence of factors that negatively impact on the OSCE is to have a high number of stations.
  • Being well-prepared for the examination is the best way to ensure that it runs without problems. However, even when you are well-prepared, expect there to be challenges.
  • The following suggestions are presented to ensure a well-run OSCE:
    • Set an exam blueprint
    • Develop the station cases with checklists and rating scales
    • Recruit and train examiners
    • Recruit and train standardised patients
    • Plan space and equipment needs
    • Identify budgetary requirements
    • Prepare for last-minute emergencies



The use of the OSCE format for clinical examination has been shown to improve the reliability and validity of the assessment, allowing examiners to say with more confidence that students are proficient in the competencies being tested. While OSCEs are considered to be fairer than other types of practical assessment, they do require significant investment in terms of finance, time and effort. However, these disadvantages are offset by the improvement in objectivity that emerges as a result of the approach.



curriculum ethics

Developing empathy in clinical education

This post was originally written for the Clinical Teacher iPad app, and can be downloaded there as well.


Empathy is the ability to understand the emotional context of other people and respond to them appropriately. It has been identified as the cornerstone of the clinician-patient relationship and is recognised as one of the most important characteristics of health care professionals that influence patients’ outcomes and levels of satisfaction. However, even though it is clear that empathy is an essential aspect of clinical practice, there is evidence that empathy actually decreases as a result of medical education and clinical training. In fact, the greatest decrease in empathy seems to coincide with the introduction of patient contact into the curriculum. If empathy really is valued in health care professionals, what changes need to take place in the health care curriculum in order to maintain the caring attitudes that students bring with them into their undergraduate training? How should clinical educators respond to the decline in empathy that seems to be a direct result of the clinical education process? This article explores the role of empathy in health care professional practice, and briefly identifies some strategies to further develop and maintain a caring attitude towards patients.

What is empathy?

Empathy is the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts and experiences of another human being, without having those feelings, thoughts and experiences communicated in an explicit manner. It is the capacity to share and understand another’s emotional state of mind and is often described as the ability to “put yourself into another’s shoes” (Ioannidou & Konstantikaki, 2008). In essence, empathy is the ability to understand the emotional makeup of other people and respond to them appropriately.

There are three types of empathy (Goleman, 2007):

  • Cognitive: knowing how another person feels and what they might be thinking
  • Emotional: physically feeling what another person is feeling
  • Compassionate: not only understanding a person’s situation and feeling with them, but being moved to help them

We can’t begin being empathetic when another person arrives. We have to already have made a space in our lives where empathy can thrive. And that means being open—truly open—to feeling emotions we may not want to feel. It means allowing another’s experiences to gut us. It means ceding control. Empathy begins with vulnerability. And being vulnerable, especially in our work, is terrifying. – Sara Wachter-Boettcher

See the video below for a presentation by Joan Halifax, a Buddhist who works with the terminally ill and those on death row, on the link between compassion and empathy.

Development of empathy in children

By the time children are two years old they normally begin demonstrating empathy by responding emotionally to someone else’s emotional state. At this stage, toddlers will sometimes try to comfort others or show concern for them. Children between the ages of 7 and 12 appear to be naturally inclined to feel empathy for others in pain, a finding that is consistent with functional MRI studies of pain empathy among adults. Researchers have also determined that other areas of the brain are activated when young children see another person intentionally hurt by someone else, including regions involved in moral reasoning (Goleman, 1995). The evidence suggests that, from a very young age, children are predisposed towards feeling an emotional response when confronted with another person’s suffering. Empathy, in other words, appears to be an inherent characteristic of human development that emerges spontaneously.

Empathy in clinical practice

Empathy, in the context of health care, is the “…ability to communicate an understanding of a client’s world” and is a crucial aspect of all interactions between clinicians and patients (Reynolds, Scott & Jessiman, 1999). It is the clinician’s way of saying (Egan, 1986, p. 99):

I’m with you, I’ve been listening carefully to what you’ve been saying and expressing, and I’m checking if my understanding is accurate.

It is considered to be an appreciation of the patient’s emotions and associated expression of that awareness to the patient. Empathy is also believed to significantly influence patient satisfaction, adherence to medical recommendations, clinical outcomes, and professional satisfaction. In the clinical setting, the common definition of empathy has been expanded to include emotive, moral, cognitive and behavioral dimensions (Stepien & Baernstein, 2006):

  • Emotive: the ability to imagine patients’ emotions and perspectives
  • Moral: the physician’s internal motivation to empathise
  • Cognitive: the intellectual ability to identify and understand patients’ emotions and perspectives
  • Behavioral: the ability to convey an understanding of those emotions and perspectives back to the patient

These additional features of empathy highlight that emotional engagement, and not just intellectual understanding, is an important aspect of effective empathy. However, some have suggested that the emotional aspect of empathy brings it closer to sympathy. Confusing the two is a conceptual challenge: in sympathy the clinician actually experiences the other person’s emotions, as opposed to simply appreciating that they exist. This is problematic because when clinicians sympathise with patients and share their suffering, it may lead to decreased objectivity, emotional fatigue and subsequent burnout.

During medical education, we first teach the students science, and then we teach them detachment. To these barriers to human understanding, they later add the armor of pride and the fortress of a desk between themselves and their patients. – Howard Spiro

Decline in empathy during medical training

Empathy has been identified as one of the most important characteristics of medical professionals and is routinely screened for among students. However, while the development of empathy seems to be an essential aspect of positive health care relationships, there is some evidence that as medical students move through the curriculum, their scores on tests of empathy drop, with the largest decrease occurring at about the same time that they begin to see patients. Studies show that the empathy scores of students in their preclinical years were higher than in their clinical years. In addition, gender was a significant predictor of empathy, with women having higher scores on tests of empathy than men. Students with high baseline empathy showed a smaller decrease in empathy scores than students with low baseline empathy during medical education. Self-reported empathy for patients, which is potentially a critical factor in good patient-centered care, seems to wane as students progress in their clinical training, particularly among those entering technology-oriented specialties (Chen et al., 2012).

What we need in medical schools is not to teach empathy, as much as to preserve it – the process of learning huge volumes of information about disease, of learning a specialised language, can ironically make one lose sight of the patient one came to serve; empathy can be replaced by cynicism – Abraham Verghese

There are good reasons for the decrease in empathy, including the fact that students work in high-stress environments that place significant pressure on them through heavy workloads, intense time pressures and a diminished sense of autonomy in the healthcare system. In many health systems productivity is valued and rewarded financially, and doctors who don’t see as many patients as their peers are sometimes seen as slow and inefficient. The stress of studying and working in the clinical environment may eventually take its toll on students and clinicians in terms of their time, and physical and emotional well-being, all of which make it difficult for them to be empathic. The focus on science and rationality during medical training tends to emphasise detachment and objective clinical neutrality, and prioritises the technological over the humanistic. Trying to find the right balance can be tricky (Lim, 2013).

In addition, the focus of medical education seems to devalue the patient as a human being. We often talk about the “case” rather than the person. The style of writing is “objective” and impersonal, where that which can be seen is given more importance than that which can be heard. Often the patient is seen as a model, a body to be treated, or a good “teaching case” that illustrates a point (Spiro, 1992). If we accept that decreased empathy as a direct result of participation in the medical curriculum is undesirable, we need to ask how we can address the problem.

We start with students who are very caring but have no diagnostic skills, and end up with physicians with great diagnostics skill but who don’t care. – Richard Frankel

Developing empathy in clinical education

It seems that empathy can be developed, and it should therefore be possible to design a curriculum aimed at maintaining empathy during the third year of medical school. A curriculum where students are encouraged to discuss their patient reactions and emotional responses in a safe environment during their clerkships may contribute to the preservation of empathy. Students can also be introduced to the idea that empathy is a skill that can be developed and maintained, as opposed to an inherent, unchangeable personality trait. Another strategy that can affect the development of empathy in students is the introduction of the Longitudinal Integrated Clerkship, which has been shown to have a positive impact on the patient-doctor relationship (Ogur et al., 2007).

An interesting perspective on developing empathy in medical education has also been to introduce modules that incorporate literature, movies, drama and poetry into the medical education curriculum. Some medical schools have gone so far as to integrate studies of the Humanities into their curricula, suggesting that the study of literature can help to achieve the following objectives (Shapiro & Rucker, 2003):

  • Stimulate skills of close observation and careful interpretation of patients’ language and behavior
  • Develop imagination and curiosity about patients’ experiences
  • Enhance empathy for patients’ and family members’ perspectives
  • Encourage relationships and emotional connections with patients
  • Emphasise a whole-person understanding of patients
  • Promote reflection on experience and its meaning

There is evidence that empathy and attitudes toward the Humanities in general improved significantly after participation in a literature-based module. In addition, students’ understanding of the patient’s perspective became more detailed and complex after the intervention. They were also more likely to note the ways in which reading literature might help them to cope with study-related stress (Shapiro et al., 2004).

Other strategies include interventions like role-playing and video analysis to try and preserve empathy during the challenging medical education process. Studies of these interventions, particularly the use of communication skill workshops, indicate that the behavioral dimension of empathy can be influenced through curriculum change (Stepien & Baernstein, 2006). In addition, programmes that aim to validate humanism in medicine (such as the Gold Humanism Honor Society) may reverse the decline in empathy (Rosenthal et al., 2011).

Studying the humanities may also be used to combat a perceived loss of empathy that may occur over the course of medical training. – Schwartz et al., 2009

It should be noted, however, that current studies on empathy in medical students are challenged by varying definitions of empathy, small sample sizes, lack of adequate control groups, and variation among existing empathy measurement instruments (Stepien & Baernstein, 2006). Some of the empathy measures available have been assessed for research use among medical students and practising medical doctors. These studies have shown that empathy measures can be used as tools for investigating the role of empathy in medical education and clinical training. However, no empathy measures have been found with sufficient evidence of predictive validity for use as selection tools for entry into medical school (Hemmerdinger et al., 2007).

In the era of new health care policy and primary care shortages, research on empathy in medical students may have implications for the medical education system and admission policy for training institutions (Chen et al., 2012).

What we know matters, but who we are matters more. Being rather than knowing requires showing up and letting ourselves be seen. It requires us to dare greatly, to be vulnerable…Vulnerability is the birthplace of love, belonging, joy, courage, empathy, accountability, and authenticity. If we want greater clarity in our purpose or deeper and more meaningful spiritual lives, vulnerability is the path. – Brene Brown


There is clear evidence that empathy is an essential aspect of developing and maintaining effective clinician-patient relationships. However, there is also evidence to suggest that the process of clinical and medical education may actually lead to a decrease in empathy as a direct result of the way that clinical training is structured. Incorporating a range of strategies from the Humanities may help to maintain empathy in health care professional students, including using literature, poetry, art and music as ways for students to explore various aspects of empathic engagement. While it seems that the ability to measure empathy would have a significant influence on curriculum design, current studies of empathy have been criticised for a variety of reasons, indicating that stronger evidence is needed if we are to integrate the teaching and assessment of empathy in clinical education.


Chen, D.C., Kirshenbaum, D.S., Yan, J., Kirshenbaum, E. & Aseltine, R.H. (2012). Characterizing changes in student empathy throughout medical school. Medical Teacher, 34(4): 305-11. doi: 10.3109/0142159X.2012.644600.

Chen, D., Lew, R., Hershman, W. & Orlander. J. (2007). A cross-sectional measurement of medical student empathy. Journal of General Internal Medicine, October, 22(10): 1434-1438.

Ducharme, J. (2013). Medical students diagnosed with low empathy. Boston Magazine.

Egan, G (1986). The skilled helper. Brooks-Cole, Monterey, CA.

Goleman, D. (1995). Emotional intelligence: Why it can matter more than IQ. Bantam Books. ISBN: 055338371X.

Hemmerdinger, J.M., Stoddart, S. & Lilford, R.J. (2007). A systematic review of tests of empathy in medicine. BMC Medical Education, 7:24, doi:10.1186/1472-6920-7-24.

Ioannidou, F., & Konstantikaki, V. (2008). Empathy and emotional intelligence: What is it really about? International Journal of Caring Sciences, 1(3), 118–123.

Lim, J. (2013). Empathy, the real measure of a doctor. Today Magazine.

Ogur, B., Hirsh, D., Krupat, E. & Bor, D. (2007). The Harvard Medical School-Cambridge integrated clerkship: an innovative model of clinical education. Academic Medicine, April, 82(4): 397-404.

Poncelet, A., Bokser, S., Calton, B., Hauer, K.E., Kirsch, H., Jones, T., Lai, C.J., Mazotti, L., Shore, W., Teherani, A., Tong, L., Wamsley, M. & Robertson, P. (2011). Development of a longitudinal integrated clerkship at an academic medical center. Medical Education Online, 16:10. Published online 2011 April 4. doi: 10.3402/meo.v16i0.5939.

Reynolds, W. J., Scott, B., & Jessiman, W. C. (1999). Empathy has not been measured in clients’ terms or effectively taught: A review of the literature. Journal of advanced nursing, 30(5): 1177–85.

Rosenthal, S., Howard, B., Schlussel, Y.R., Herrigel, D., Smolarz, G., Gable, B., Vasquez, J., Grigo, H. & Kaufman, M. (2011). Preserving empathy in third-year medical students. Academic Medicine, 86(3): 350-358.

Schwartz, A. W., Abramson, J. S., Wojnowich, I., Accordino, R., Ronan, E. J., & Rifkin, M. R. (2009). Evaluating the impact of the humanities in Medical Education. Mount Sinai Journal of Medicine, 76, 372–380. doi:10.1002/MSJ

Spiro, H. (1992). What is empathy and can it be taught? Annals of Internal Medicine, 116(10): 843–6.

Shapiro, J., Duke, A., Boker, J., & Ahearn, C. S. (2005). Just a spoonful of humanities makes the medicine go down: Introducing literature into a family medicine clerkship. Medical Education, 39(6): 605–12. doi:10.1111/j.1365-2929.2005.02178.x

Shapiro, J., Morrison, E., & Boker, J. (2004). Teaching empathy to first year medical students: evaluation of an elective literature and medicine course. Education for Health, 17(1): 73–84. doi:10.1080/13576280310001656196

Shapiro, J., & Rucker, L. (2003). Can poetry make better doctors? Teaching the humanities and arts to medical students and residents at the University of California, Irvine, College of Medicine. Academic medicine. Journal of the Association of American Medical Colleges, 78(10): 953–7.

Stepien, K.A. & Baernstein, A. (2006). Educating for empathy: A review. Journal of General Internal Medicine, 21(5): 524–530. doi: 10.1111/j.1525-1497.2006.00443.x

education health learning social media technology

Developing mobile apps for clinical educators

I’m happy and proud to announce that my first app has been released into the App Store. I’ve been working on this project for a few months now, in collaboration with the excellent team at Snapplify, in order to get this release out the door. The name of the app is The Clinical Teacher, and it’s available for download in the App Store.

The Clinical Teacher is a mobile reference app (currently only for the iPad and iPhone but soon for Android as well) aimed at clinicians, clinical supervisors and clinical educators who are interested in improving their teaching practices. The idea is to develop short summaries (5-10 pages) of concepts related to teaching and learning practice in the clinical context, integrating rich media with academic rigor. Think of the app as a library within which various articles will be published and made available for download.

Each article within the app is based on evidence and provides insight into teaching and learning strategies in the clinical context. The articles are developed from the ground up by domain experts, making use of peer-reviewed publications and open educational resources to deliver a concise summary of the topic being explored. Articles are comprehensive enough to give you a better understanding of the topic but concise enough to cover in one sitting. However, additional resources are also provided so that you can explore the topics in even more depth.

At the moment, the content is available for purchase for a minimal fee (e.g. the Peer Review of Teaching article is $0.99), although we will push out some articles for free as we move forward. We’re inviting clinical educators to consider publishing through The Clinical Teacher with the idea of developing content that is more “academic” than a blog post, but less so than a peer-reviewed publication. Apple and Snapplify both receive 30% of the cost of the article, meaning that the author receives 40% of whatever the article makes. And you get to have your content in the App Store. This may change over time, depending on how much editorial and layout work we have to do on articles before they can be published. If you’d like to write a short piece for The Clinical Teacher, submit your idea here.

The idea is that over time we’ll work with Snapplify to develop features in the app that move it beyond a content delivery app and integrate social features which we can use to create a community around teaching and learning practices in clinical education. But that’s for later. Right now it’s just great to see the app available after all the effort. I’d love to hear any feedback or suggestions for improvement.

Keep up to date on further development at   Google+   |   Twitter   |   Facebook

education PhD physiotherapy research social media teaching

SAFRI residential session

I’ve recently finished the second residential session of SAFRI, a programme for the development of research in medical education in Africa. I spent a big part of 2010 working on my SAFRI project (link to project notes), which I’ll be presenting at this year’s SAAHE conference in Potchefstroom. One of the main assignments for this session was the development of a poster presenting the results of my research project.

We spent most of the first day assisting the first year (2011) Fellows with the research projects that they’ll be implementing this year. I was surprised at how much more confident I felt in terms of being able to give feedback this year. Last year I felt a bit lost a lot of the time and wasn’t really sure of myself. It’s funny how you don’t really notice personal development until you’re in a similar situation as you were before and can compare your previous responses to current ones.

Over the course of the next few days we spent a lot of time discussing the following main topics, often using our individual projects as a foundation:

  • Various aspects of effective leadership
  • Research dissemination in the form of oral and poster presentations, and abstract development
  • The scholarship of teaching
  • Programme evaluation
  • Creating a portfolio of professional development

During the course of the session I had some really interesting conversations with other fellows around their research projects, which I’m hoping will lead to future collaboration.

One of these is the possibility of introducing a clinical placement for physio students into a rural clinical school in Worcester. Some of our students want to go back to practice physiotherapy in small, rural villages in remote parts of the country, and a rural clinical setting would better prepare them for this.

The other project was one implemented by a palliative care physician who introduced an integrated tutorial on palliative care with small groups of students. Since I had a few students who shared personal experiences around patients with terminal illnesses, and their struggles around related issues, I thought I could learn a lot from attending the tutorial with the medical students and seeing if there’s anything that could translate to our students. I teach a section on Death and Dying in the Professional Ethics module, and this tutorial sounds really inspiring in terms of changing my approach.

On the whole, this session has been far less intensive than the first one, although I didn’t have much free time. I found that I spent a lot more time in discussion with other fellows, which was a great learning experience.

assignments conference education ethics health PhD physiotherapy research social media technology

Using social networks to develop reflective discourse in the context of clinical education

My SAFRI project for 2010 looked at the use of a social network as a platform to develop clinical and ethical reasoning skills through reflective discussion between undergraduate physiotherapy students. Part of the assignment was to prepare a poster for presentation at the SAAHE conference in Potchefstroom later this year, which I’ve included below.

I decided to use a “Facebook style” layout to illustrate the idea that research is about participating in a discussion, something that a social network user interface is particularly well-suited to. I also like to try and change perceptions around academic discourse and do things that are a little bit different. I hate the general idea that “academic” equals “boring” and think that this is such an exciting space to work in.


I also included a handout with additional information (including references) that I thought the audience might find interesting, but which couldn’t fit onto the poster.

One of the major challenges I experienced during this project was that I didn’t realise how much time it’d take to complete. I’d thought that the bulk of my time would be used on building and maintaining the social network and facilitating discussion within it, but the assignment design (see handout) took a lot more effort than I expected. I had to make sure that it was aligned with the module learning objectives, as well as the university graduate attributes.

In terms of moving this project forward, I think that it might be possible to use a social network as a focus for other activities that might contribute towards a more blended approach to learning and clinical education. For example:

  • Moving online discussions into physical spaces, either in the classroom or clinical environment
  • Sharing and highlighting student and staff work
  • Sharing social and personal experiences that indicate personal development, or provide platforms for supportive engagement
  • Extensions of classroom assignments
  • Connecting and collaborating with students and staff from other physiotherapy departments, both local and international
  • Helping students to acquire skills to help them navigate an increasingly digital world

I think that one of the most difficult challenges to overcome as I move forward with this project is going to be getting students and staff to embrace the idea that the academic and social spaces aren’t necessarily separate. Informal learning often happens within social contexts, but universities are about timetables and schedules. How do you convince a staff member that logging into a social network at 21:00 on a Saturday evening might be a valuable use of their time?

If we can soften the boundary between “social” and “academic”, I think that there’s a lot of potential to engage in the type of informal discussion I see during clinical supervision, and which students report really enjoying. I think that the social, cognitive and teacher presences from the Community of Inquiry model may help me to navigate this space.

If you can think of any other ways that social networks might have a role to play in facilitating the clinical education of healthcare professional students, please feel free to comment.

conference research

Reflections on SAAHE 2010

The SAAHE conference has come and gone for the 3rd year running. It’s been an interesting and engaging 3 days, and since I’ve already posted all my notes, these are just a few thoughts on what it’s like having a conference in South Africa. And it’s the last post, I promise.

To get the negative stuff out of the way first: two things really disappointed me, and I’ve mentioned them at every South African conference I’ve attended:

  • A lack of dedicated wireless access, even though internet access is not an issue at tertiary educational institutions
  • No video or audio coverage of any of the tracks, not even of the keynote speakers (I’m sorry, but uploading presentations just doesn’t cut it)

As a collection of South African health educators who say they want to participate in a global, regional and national conversation on these issues, how can you possibly do that if you have no voice? I can’t think of any reason not to provide dedicated access in all conference venues.

Piggybacking on this idea of what we could do with access: I had an interesting conversation with a colleague while we were trying to decide which presentations to attend. We realised that we were trying to situate our own work within the broader context of what was happening at the conference. Where does my work fit in with all the other work being done in my own (or a similar) domain?

It seems to make sense that if all attendees (or a significant proportion) were tweeting, blogging, waving or otherwise engaged in providing their own personal experiences, perceptions, insights, etc., we would have multiple streams within which we would be able to situate our own work. Not that we would necessarily watch the streams while presenting (although that would be an option), but it would be nice to reference the work of others that you’d already seen in the stream. These referrals could be aggregated after the conference to see who’s working on similar ideas (or who should be working on similar ideas) and make it easier to build national networks for collaboration. What topics are most common? Who seems to be involved in the most conversations? Who are the “qualitative” people who can give me the insight I need for my own work?

Unfortunately, this won’t happen anytime soon. It’s not a technical problem (all the infrastructure and technology is there), but rather the complex human component. Besides a resistance to learning new things (“I’m a busy person, I don’t have the time”), most health educators aren’t technically savvy.

Finally, during the last half of the last day, we had a power outage across the campus and we had to continue outside. Interestingly, most people seemed quite amused with the experience. We got to sit outside and enjoy the beautiful weather and have a more informal (if a bit rushed) discussion. It was also refreshing for me having to present my work without a presentation on a computer. I felt a bit more connected with the audience, although being in such close proximity could also be a bit daunting. See below for our “conference venue”.

All in all, it was a great conference, I learned a lot and the organisers should be proud of what they achieved.