twitter feed

Twitter Weekly Updates for 2012-04-30

  • @nlafferty Hi Natalie, thanks for the mention and for the download
  • RT @clin_teacher: Just published “The Delphi Approach for Reaching Consensus”. Check it out in the Clinical Teacher app
  • RT @clin_teacher: 442 downloads in 52 countries after 1 week of being in the app store. Would love to hear your feedback
  • Airplane Lavatory Self-Portraits <- Amazing
  • @rachaellowe Nice, might have some cool things to show you by then
  • @rachaellowe Hi Rachael, have actually been emailing with @rogerkerry1 about his project but unfortunately can’t assist at this time
  • @s_eller Thanks 4 the RT & download, would love to hear your thoughts on the app. Also busy building something similar for clinical students
  • @neil_mehta Thanks very much. It doesn’t have a lot of content yet, but I’m working with some authors to get more stuff published soon
  • RT @sameshnaidoo: RT @zarsa: A Stark, a Lannister, a Baratheon, and a Targaryen walk into a bar… And then everyone you love dies
  • @WhatAboutRob Thanks for coming round. Great service as always from the team at @Snapplify 🙂
twitter feed

Twitter Weekly Updates for 2012-04-09

PhD research

The Delphi method in clinical research

Thank you to Conran Joseph for his contribution to this post. We began developing this content as part of another project that we’re working on (more to come on that later) and then extended it as I made notes for a paper that I’m writing for my PhD.

The Delphi method was developed in the 1950s with the purpose of soliciting expert opinion in order to reach consensus (Dalkey & Helmer, 1963, p. 458). It was so named because it was originally developed as a systematic, interactive means of forecasting or prediction, much as the ancient Greeks consulted the Oracle at Delphi to hear their fortunes. The approach relies on a collection of opinions from a panel of experts in a domain of real-world knowledge, and aggregates those opinions to reach consensus around a topic. It differs from traditional surveys in that it is an attempt to identify what could, or should be, as opposed to what is (Miller, 2006).

Delphi studies are generally used to (Delbecq, Van de Ven & Gustafson, 1975, p. 11):

  • Determine or develop a range of possible program alternatives
  • Explore or expose underlying assumptions or information leading to different judgments
  • Seek out information which may generate a consensus on the part of the respondent group
  • Correlate informed judgments on a topic spanning a wide range of disciplines
  • Educate the respondent group as to the diverse and interrelated aspects of the topic

Some of the other key features of Delphi survey research are that the participants are unknown to each other and that the process is iterative, with each subsequent round being derived from the results of the previous one. In other words, each participant receives a summary of the range of opinions from the previous round, and is given an opportunity to reassess their own opinions based on the feedback of other panelists. This controlled feedback helps to reduce the effect of noise, defined as communication which distorts the data because it relates to individual interests and biases rather than to problem solving. The feedback takes the form of a summary of the prior iteration, distributed to the panel as an opportunity to generate additional insights and clarify what was captured previously (Dalkey, 1972). In addition, participants need not be geographically collocated (i.e. they can be physically dispersed). This provides some level of anonymity, which also serves to reduce the effect of dominant individuals and group pressure to conform.

Within the context of clinical education, Delphi studies have been used to develop assessment practices that are not always easy to define. The modifiable behaviours and clinical competence that clinical educators are interested in are not so much the concepts and skills covered in the classroom, but rather their application in practice. Assessment of the knowledge and skills required for competent practice usually samples a small subset of the total possible range of items, since it isn’t feasible to assess all possible combinations. In addition, not all clinicians agree on what the most important components of practice and assessment are. The Delphi method is therefore an appropriate methodological approach for gaining consensus around the critical issues of what to assess, how it should be assessed and what strategies can be used to improve practice. Delphi studies have been used in healthcare for the planning of services, the analysis of professional characteristics and competencies, assessment tool design and curriculum development (Cross, 1999; Powell, 2003; Joseph, Hendricks & Frantz, 2011).

Designing a Delphi study
The most important aspect of your Delphi study will be participant selection, as this will directly influence the quality of the results you obtain (Judd, 1972; Taylor & Judd, 1989; Jacobs, 1996). Participants in a Delphi survey are usually experts in the field, and should provide valuable input to improve the understanding of problems, opportunities and solutions. Having said that, there is no standard description of who should be included in the panel, nor of what an “expert” is (Kaplan, 1971). Although there are no set criteria for selecting the panel, eligible participants should come from related backgrounds and experiences within the domain, be capable of making helpful contributions, and be open to adapting their opinions for the purpose of achieving consensus. It is not enough for participants to simply be knowledgeable in the domain being explored (Pill, 1971; Oh, 1974). While it is recommended that general Delphi studies use a heterogeneous panel (Delbecq, et al., 1975), Jones and Hunter (1995) suggest that domain specialists be used in clinical studies. Owing to the factors highlighted above, it is essential to establish the credibility of the panel, in order to support the claim that they are indeed experts in the field.

The next aspect to consider is panel size. This often depends on the scope of the problem and the number of knowledgeable informants/experts available to you, and there is no agreement in the literature on what size is optimal (Hsu & Sandford, 2007). Depending on the context, larger panels may yield more reliable results. However, it has been suggested that 10 to 15 participants can be sufficient if their backgrounds are homogeneous (Delbecq, Van de Ven & Gustafson, 1975).

The first round of questionnaires usually consists of open-ended questions that are used to gather specific information about an area or domain of knowledge, and serves as a cornerstone for subsequent rounds (Custer, Scarcella, & Stewart, 1999). It is acceptable for this questionnaire to be derived from the literature (Hsu & Sandford, 2007), and it need not be tested for validity or reliability. The structuring of the questionnaires, the types of questions and the number of participants will determine the data analysis techniques used to reach consensus. While the process could theoretically continue indefinitely, there is some agreement that three rounds of surveys are usually sufficient to reach a conclusion.

The results of the first round are typically used to identify major themes emerging from the open-ended questions. The responses are then collated into questionnaires that form the basis of the subsequent rounds. From the second round onwards the data is usually analysed quantitatively, using either a rank order or rating technique (the choice usually depends on the sample size). The results are analysed in order to determine levels of agreement in the ranking order. Researchers caution that this level of agreement should be decided on before data collection begins, together with a plan for how the data will be analysed, so that there is a clear cut-off point for inclusion and exclusion. The level of agreement is usually set at 75%, although this can be modified if agreement is not reached. In some cases, participants may also be asked to provide a rationale for their ranking decisions, especially when panelists provide opinions that lie outside the group’s consensus for a domain or topic.
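To make the cut-off concrete, here is a minimal Python sketch of an agreement calculation of this kind. The item names and ratings are invented, and the rule that a rating of 4 or 5 on a five-point scale counts as agreement is an illustrative assumption; your study would define its own rule before data collection.

```python
# Hypothetical round-two ratings: item -> panelist scores on a 1-5 Likert scale
ratings = {
    "clinical reasoning": [5, 4, 5, 4, 5, 4, 3, 5],
    "report writing":     [3, 2, 4, 3, 2, 3, 4, 2],
}

CUTOFF = 0.75  # 75% agreement, decided before data collection begins

def reaches_consensus(scores, cutoff=CUTOFF):
    """An item reaches consensus when the proportion of panelists
    rating it 4 or 5 meets or exceeds the cutoff."""
    agreement = sum(1 for s in scores if s >= 4) / len(scores)
    return agreement >= cutoff

for item, scores in ratings.items():
    verdict = "include" if reaches_consensus(scores) else "exclude"
    print(f"{item}: {verdict}")
```

With these invented numbers, “clinical reasoning” is included (7 of 8 panelists rated it 4 or 5, i.e. 87.5%) and “report writing” is excluded (25%).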

Procedure of running a Delphi study

  1. Determine your objectives. What is it that you want your panelists to achieve consensus on?
  2. Design your first set of questions using an extensive review of the available literature. Be sure to base this first round of questions on the objectives you wish to achieve.
  3. Test your questions for ambiguity, time, and appropriateness of responses. Send it out to a small sample of experts or at least colleagues and review their responses to ensure that your questions are useful in terms of achieving your objective.
  4. Send out the first round of the survey.
  5. Send a reminder for panelists to complete the first round, about 1-2 weeks after the initial survey was sent, although the actual time frames will depend on your study.
  6. Analyse the responses from round one, and use these results to design the survey for the second round.
  7. Test round two on a small sample of panelists, in order to make sure that the responses will provide the data you need.
  8. Send out the survey for the second round.
  9. Send a reminder for round two. Again the exact time will depend on your particular needs, and the context of your study.
  10. Analyse the responses from round two and use these results to design the survey for round three.
  11. Test the survey for the third round, and send it out when you are satisfied. Remind panelists to complete if necessary.
  12. Analyse the responses from the third round.
  13. Determine if your objectives have been achieved. Include additional rounds if you decide that you need more information.
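As a rough illustration only, the steps above can be sketched as a loop. Every function here is a hypothetical stand-in for a manual step (piloting, emailing surveys, thematic analysis), not a real survey API, the canned responses are invented, and “no change since the last round” is used as a stand-in stopping rule for step 13.

```python
def analyse(responses):
    """Steps 6, 10 and 12: reduce raw answers to a set of themes (stubbed)."""
    return sorted(set(responses))

def objectives_achieved(themes, previous):
    """Step 13: stop once the themes no longer change between rounds
    (a stand-in for whatever stopping rule the study actually defines)."""
    return themes == previous

def collect_round(round_no):
    """Steps 3-5 and 7-9: pilot, send and remind (stubbed with invented data)."""
    canned = {
        1: ["reasoning", "empathy", "reasoning", "communication"],
        2: ["reasoning", "empathy", "communication"],
        3: ["reasoning", "empathy", "communication"],
    }
    return canned[round_no]

previous = None
for round_no in (1, 2, 3):  # three rounds are usually sufficient
    themes = analyse(collect_round(round_no))
    if objectives_achieved(themes, previous):
        break
    previous = themes
print(f"stopped after round {round_no} with themes: {themes}")
```

The point of the sketch is simply that each round’s instrument is derived from the previous round’s analysis, and that the loop ends when the stopping rule is met or the planned number of rounds runs out.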

Analysis of results
Quantitative analysis
The aspects to consider for quantitative analysis are related to panel size and questionnaire design, which in turn often depend on the scope of the problem and the number of knowledgeable informants/experts available to you. Some researchers believe that the more participants there are, the higher the reliability of the results. The most widely used technique for gaining consensus in this paradigm is to set an agreement level. Although controversy exists about the cut-off point for agreement, numerous authors indicate 75% agreement as an appropriate level. Apart from obtaining a level of agreement, other rating techniques are also commonly used to reach consensus. These include ranking elements in order of importance and calculating the mean to order elements from most to least important. Likert-type scales are also used to determine whether an element should be included or not. Thus, the nature of the analysis will depend strongly on the structuring of the questionnaires, the types of questions and the number of participants.
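The rank-by-mean technique mentioned above can be sketched in a few lines of Python. The element names and Likert ratings are invented for illustration.

```python
from statistics import mean

# Hypothetical Likert ratings (1 = not important, 5 = essential) per element
responses = {
    "communication":     [5, 5, 4, 5, 4],
    "manual techniques": [4, 3, 4, 4, 3],
    "record keeping":    [3, 2, 3, 3, 2],
}

# Rank elements from most to least important by their mean rating
ranked = sorted(responses, key=lambda k: mean(responses[k]), reverse=True)
for rank, element in enumerate(ranked, start=1):
    print(f"{rank}. {element} (mean = {mean(responses[element]):.2f})")
```

With these invented numbers the ordering is communication (4.60), manual techniques (3.60), record keeping (2.60); in a real study the means would come from the panel’s second- or third-round ratings.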

Qualitative analysis
A qualitative Delphi study does not rely on statistical measures to establish consensus among participants. Rather, it is the analysis of emergent themes (assuming no structure was initially imposed) that gives rise to the conclusion. The results from open-ended questions will usually be in the form of short narratives, which should be analysed using qualitative techniques. The researcher reviews the responses and categorises them into emerging themes. This process continues until saturation is reached, i.e. until no new information or themes arise. These themes can then either be used to form the basis of the next round of questions (as in an exploratory or developmental Delphi study), or they can be used to derive a list of items that panelists can rank.

Advantages and disadvantages of using the Delphi method
Whereas in committees and face-to-face meetings, dominant individuals may monopolise the direction of the conversation, the Delphi method prevents this by placing all responses on an “equal” footing. Anonymity also means that participants should only take into account the information before them, rather than the reputation of any particular speaker. It likewise allows for the expression of personal opinions, open critique, and the admission of errors by giving opportunities to revise earlier judgments. In addition, the researcher is able to filter, summarise and discard irrelevant information, which may be distracting for participants in face-to-face meetings. Thus, potentially distracting group dynamics are removed from the equation (Hsu & Sandford, 2007).

One of the major disadvantages is that there is a high risk of both low response rates and attrition. In addition, a Delphi study typically takes a lot of time, and adds significantly to the workload of the researcher. However, it is felt that the advantages of using a Delphi study in the right context add value that is difficult to achieve with other methods.

The Delphi method is a useful means of establishing consensus around topics that have no set outcomes and which are open to debate. The credibility of the panel you select for your study is vital if you want to ensure the results are taken seriously.


  • Butterworth, T. & Bishop, V. (1995). Identifying the characteristics of optimum practice: findings from a survey of practice experts in nursing, midwifery and health visiting. Journal of Advanced Nursing, 22, 24-32
  • Cross, V. (1999). The Same But Different: A Delphi study of clinicians’ and academics’ perceptions of physiotherapy undergraduates. Physiotherapy, 85(1), 28-39
  • Custer, R. L., Scarcella, J. A., & Stewart, B. R. (1999). The modified Delphi technique: A rotational modification. Journal of Vocational and Technical Education, 15(2), 1-10
  • Dalkey, N. C. & Helmer, O. (1963). An experimental application of the Delphi Method to the use of experts. Management Science, 9(3), 458-468
  • Delbecq, A. L., Van de Ven, A. H. & Gustafson, D. H. (1975). Group Techniques for Program Planning: a guide to nominal group and Delphi processes
  • Hsu, C.-C., & Sandford, B. (2007). The Delphi Technique: Making sense of consensus. Practical Assessment, Research and Evaluation, 12(10)
  • Jacobs, J. M. (1996). Essential assessment criteria for physical education teacher education programs: A Delphi study. Unpublished doctoral dissertation, West Virginia University, Morgantown
  • Jones, J. & Hunter, D. (1995). Qualitative research: Consensus methods for medical and health services research. British Medical Journal, 311, 376-380
  • Joseph, C., Hendricks, C., & Frantz, J. (2011). Exploring the Key Performance Areas and Assessment Criteria for the Evaluation of Students’ Clinical Performance: A Delphi study. South African Journal of Physiotherapy, 67(2), 1-7
  • Judd, R. C. (1972). Use of Delphi methods in higher education. Technological Forecasting and Social Change, 4(2), 173-186
  • Kaplan, L. M. (1971). The use of the Delphi method in organizational communication: A case study. Unpublished master’s thesis, The Ohio State University, Columbus
  • Miller, L. E. (2006, October). Determining what could/should be: The Delphi technique and its application. Paper presented at the 2006 annual meeting of the Mid-Western Educational Research Association, Columbus, Ohio
  • Murphy, M. K., Black, N., Lamping, D. L., McKee, C. M., Sanderson, C. F. B., Askham, J. et al. (1998). Consensus development methods and their use in clinical guideline development. Health Technology Assessment, 2(3)
  • Oh, K. H. (1974). Forecasting through hierarchical Delphi. Unpublished doctoral dissertation, The Ohio State University, Columbus
  • Pill, J. (1971). The Delphi method: Substance, context, a critique and an annotated bibliography. Socio-Economic Planning Sciences, 5, 57-71
  • Powell, C. (2003). The Delphi technique: myths and realities. Journal of Advanced Nursing, 41(4), 376-382
  • Skulmoski, G. J., & Hartman, F. T. (2007). The Delphi Method for Graduate Research. Journal of Information Technology Education, 6
PhD physiotherapy research

Results of my Delphi first round

I’ve recently finished the analysis of the first round of the Delphi study that I’m conducting as part of my PhD. The aim of the study is to identify the personal and professional attributes that determine patient outcomes, as well as the challenges faced in clinical education. These results will serve to inform the development of the next round, in which clinical educators will suggest teaching strategies that could be used to develop these attributes and overcome the challenges.

Participants from the first round had a wide range of clinical, supervision and teaching experience, as well as varied domain expertise. Several themes were identified, which are summarised below.

In terms of the knowledge and skills required of competent and capable therapists, respondents highlighted the following:

  • They must have a wide range of technical and interpersonal skills, as well as a good knowledge base, and be prepared to continually develop in this area.
  • Professionalism, clinical reasoning, critical analysis and understanding were all identified as being important, but responses contained little else to further explain what these concepts mean to respondents.

In terms of the personal and professional attributes and attitudes that impact on patient care and outcomes, respondents reported:

  • A diverse range of personal values that they believe have relevance in terms of patient care
  • These values were often expressed in terms of a relationship, either between teachers and students, or between students and patients
  • Emotional awareness (of self and others) was highlighted

In terms of the challenges that students face throughout their training:

  • Fear and anxiety, possibly as a result of poor confidence and a lack of knowledge and skills, leading to insecurity, confusion and uncertainty
  • Lack of self-awareness as it relates to their capacity to make effective clinical decisions and reason their way through problems
  • A disconnect between merely “providing a service” and “serving”
  • A lack of positive and supportive clinical learning environments, poor role models, and too little time to reflect on their experiences
  • The clinical setting is complex and dynamic, and students struggle to deal with the uncertainty inherent in clinical practice
  • Students often “silo” knowledge and skills, and struggle to transfer between different contexts
  • Students struggle with the “hidden culture” of the profession, i.e. the language, values and norms that clinicians take for granted

These results are not significantly different from the literature in terms of the professional and personal attributes that healthcare professionals deem to be important for patient outcomes.

The second round of the Delphi is currently underway and will focus on the teaching strategies that could potentially be used to develop the attitudes and attributes highlighted in the first round.