PHT402: Final thoughts and moving forward

This is a short review post for the PHT402 Professional Ethics course that was recently completed by physiotherapy students from the University of the Western Cape and qualified physiotherapists who participated through Physiopedia. We believe that this is the first time that a completely open, online course in professional ethics has been run as part of a formal undergraduate health care curriculum.

In total we had 52 UWC students and 36 external participants from around the world, including South Africa, the USA, the United Kingdom, India, New Zealand, Estonia, Saudi Arabia and Canada. The context of the course, objectives, course activities and participant learning portfolios are available on the project page, so I won’t go over those again, other than to say that the course was aimed at developing in students a set of attributes that went beyond simply teaching them about concepts in professional ethics. In other words, it was about trying to change ways of thinking and being, as opposed to teaching content. It’s too early to say whether or not we achieved this, but if nothing else, we do seem to have made a significant impact on the personal and professional lives of some of the participants.

One of the most interesting things about this course has been the enormous variety of perspectives that emerged, which on a personal level have driven my thinking and reasoning in different directions than if I had engaged with the topic in isolation. From one of the participants: “…it brings on thoughts that I find unsettling”. This is a good thing. One of the points of the course was to put people into those contested spaces where the “right” and “wrong” answers are ambiguous and context dependent. The more we explore those spaces within ourselves and with others, the better prepared we’ll be to navigate difficult ethical situations in our professional practice.

Running the PHT402 Professional Ethics course in this way has been an enormous learning experience for me and many lessons emerged during the course that were unanticipated. Here are some of the things we did that I’ve never done before and which challenged us to think about different ways of teaching and learning:

  • Most participants were unfamiliar with how the open web works and so had no experience of following the work of others. We needed to give very explicit instructions on setting up blogs and following other participants. Email support was extensive and many participants were regularly in contact. I learned that email is still an essential aspect of working digitally.
  • Participants were geographically distributed and most had never had any blogging experience. We needed to figure out how to teach them to blog without being able to get them all into a classroom. We wanted to teach them not only how to write blog posts, but also how to embed media, link to other participants, and use tags and categories. We wrote a series of posts designed not only to give instructions on how to blog, but also on how to write engaging posts for the web. Every participant was encouraged to follow these posts to ensure that they were exposed to this input.
  • It wasn’t possible for the facilitators to comment on every post of every user (although I gave it my best shot), but we had to make sure that everyone got feedback of some kind on their posts. We designed a form in Google Forms and asked every participant to review the work of 3 other participants. Then we aggregated that feedback (which was both quantitative and qualitative) and sent it to each participant. In this way, we ensured that everyone got feedback in one form or another, even if they weren’t getting comments on their posts (a rough sketch of the aggregation step follows this list).
  • It’s difficult to give a grade (this was part of a formal curriculum, so grades were unfortunately a necessity) for participants’ perceptions of topics like equality, morality and euthanasia. We decided the students would be graded on the extent to which they could demonstrate evidence of learning in their final posts. We said that this could be in the form of identifying personal conflict and resolution (one of the aims of the course), linking to the posts of others with analysis and integration of those alternative ideas (learning collaboratively), use of the platform features e.g. tagging, categories, Liking, Commenting, etc. (using technology to enable richer forms of communication). I created a rubric that is more extensive than this list, but it just goes to show that the assessment of a course like this needs to be about more than simply asking if the student covered the relevant content.
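For anyone curious about the peer-feedback step mentioned above, here is a minimal sketch of the kind of aggregation we did with the Google Forms responses. The file name and column headings are hypothetical (the actual form and spreadsheet were structured differently), but the idea is the same: group every review by the participant being reviewed, then send each person their own summary.

```python
import csv
from collections import defaultdict

# Hypothetical column names standing in for the Google Forms CSV export;
# the actual form used in the course had its own wording and structure.
feedback = defaultdict(lambda: {"scores": [], "comments": []})

with open("peer_review_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        reviewee = row["Whose post are you reviewing?"]
        feedback[reviewee]["scores"].append(int(row["Rating (1-5)"]))
        feedback[reviewee]["comments"].append(row["Qualitative feedback"])

# One summary per participant: a mean rating plus the collected comments,
# ready to be pasted into an email or personal feedback document.
for person, data in sorted(feedback.items()):
    mean_rating = sum(data["scores"]) / len(data["scores"])
    print(f"{person}: mean rating {mean_rating:.1f} from {len(data['scores'])} reviewers")
    for comment in data["comments"]:
        print(f"  - {comment}")
```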

Now that this course has been completed, I plan to do research on the data that was generated. This was always part of the project and as such it had ethical clearance from my institutional review board from the outset:

  • I designed the learning environment using principles that I had developed as part of my PhD project. This course could be seen as a pilot study aimed at further testing those design principles as a way of developing a set of Graduate Attributes in an online learning space. To this end I’ll be doing a series of focus groups to find out from students whether or not the course objectives were achieved.
  • In addition to the focus groups I’d like to try and triangulate that data with a content analysis of the blog posts and comments that were generated during the course. I’ll qualitatively analyse the course outputs that were created by participants.
  • I’d like to survey all of the participants to get a general sense of their experiences and perceptions of having completed a course that was very different to what they were used to from a traditional curriculum. I’d like to find out if offering a course in this way is something that we should be looking at in more depth in our department.
  • During the course, a significant number of connections were made between people on the open web. I’d like to use social network analysis to see if there’s anything interesting that emerged as a result of how people connected with each other. If you have any suggestions for methods to analyse a set of blog posts on WordPress, please let me know (a rough starting point is sketched after this list).
  • Finally, I want to interview the other facilitators who helped me to develop the course and who were based in different countries at different times in the project. I want to see if there are any lessons that could be developed for other, geographically dispersed teachers who would like to run collaborative online courses.
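On the social network analysis point above, here is a rough sketch of the kind of starting point I have in mind, using the networkx library. The edge list is invented purely for illustration; in practice the links would have to be harvested from the participants’ WordPress posts and comments first.

```python
import networkx as nx

# Hypothetical edges: (linking blog, linked-to blog) pairs harvested from the
# hyperlinks in participants' posts and comments. These examples are made up.
links = [
    ("student_a", "student_b"),
    ("student_a", "clinician_x"),
    ("student_b", "student_a"),
    ("clinician_x", "student_c"),
]

G = nx.DiGraph()
G.add_edges_from(links)

# Two simple starting points: who received the most links (in-degree), and
# who sits on the most paths between other participants (betweenness).
most_linked = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
betweenness = nx.betweenness_centrality(G)

print("Most linked-to participants:", most_linked[:3])
print("Betweenness centrality:", betweenness)
```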

Alternative ways of sharing my PhD output

“Online journals are paper journals delivered by faster horses”

– Beyond the PDF 2

I’ve started a process of creating a case study of my PhD project, using my blog as an alternative means of presenting and sharing my results. Most of the chapters have already either been published or are under review with peer-reviewed journals, so I’ve played my part in the publishing game and jumped through the hoops of my institution. The full-length thesis has also been lodged with the institutional repository, so it is available, but in all honesty it’s a big, unwieldy thing, difficult to navigate and work through for all but the most invested reader.

Initially I thought that the case study would simply be a summary of the entire project but quickly realised that this would defeat the object of using the format. If people want the “academic” version, with the full citations, reference lists, standard headings (Background, Method, Results, etc.) then they’d still be able to download the published paper or even just read the abstract as a summary. The online case study should be more blog / wiki, than peer-reviewed paper. I’m starting to realise that one of the great things about the PhD-by-publication approach is that with the papers already peer-reviewed and published, I’m freed from having to continue playing the game. I get to do whatever I want to with the case study, because the “serious, academic” stuff is done.

After exploring a few other options (see list below), I decided that HTML was the best way to share the process in a format that would be more flexible and engaging than a PDF. HTML is a text-based format that degrades well (i.e. old browsers, mobile browsers and slow internet connections can all deal reasonably well with text files) while at the same time allowing for advanced features like embedded video and presentations. Also, being an open standard, HTML is unlikely to suffer from the problems of software updates that disable functionality available in previous versions. Think how many people were (and continue to be) inconvenienced by Microsoft’s move from the .doc to the .docx format.

Here are some of the features I thought were important for whatever platform I chose to disseminate my research. It should:

  • Be based on an open standard so that it would always be readable or backwards compatible with older software
  • Have the ability to embed multimedia (video, audio, images, slideshows)
  • Enable some form of interaction with the reader
  • Have a responsive user interface that adapts to different devices and screen sizes i.e. it should be device independent
  • Allow the content to be presented in a visually attractive format (“Pretty” is a feature)
  • Be able to be adapted and maintained easily over time
  • Be able to export the content in multiple formats (e.g. Word, ODT, PDF)

Before deciding on using HTML and this blog, here is a list of the alternative dissemination methods I considered, and the reasons I decided not to go with them:

  • ePub is an open standard and can potentially be presented nicely, but not all ePub readers are created equal and I didn’t want anyone to have to jump through hoops to read my stuff. For example, an ebook published for the Kindle may not display in iBooks.
  • PDF is a simple, open standard that is easy to create, but it’s too rigid in the sense that it conforms to the “digital paper” paradigm. It wouldn’t allow me to be flexible in how content is displayed or shared.
  • Google+ is visually pleasing but it is not open (the API is still read-only) and I have no idea if it will be around in a few years time.
  • Github was probably never a real option, but I like the idea of a collaborative version control system that allows me (and potentially others) to update the data over time, capturing all the changes made. However, it is simply too technical for what I wanted to do.
  • Tiddlywiki actually seemed like it might win out, since it’s incredibly simple to use, and is visually appealing with a clean user interface. I even began writing a few notes using it. The problem was that once I decided that HTML was the way to go, there wasn’t a strong enough reason to use anything other than my own blog.

If you’re interested in exploring this idea further, check out the Force11 White Paper: Improving The Future of Research Communications and e-Scholarship as a manifesto for alternative methods of sharing research.

PhD project using design research

I’m supposed to be submitting my thesis in about 3 weeks time, so obviously I’m getting distracted by anything that means I can avoid that nightmare. Which is why I spent about an hour this morning making this nice flowchart. Putting complex things into pictures makes them easier for me to understand, so making this graphic was just a way for me to make sure that I actually do understand what I’m supposed to be doing. If you see anything fundamentally flawed with this process, please make sure that you keep it to yourself. Seriously.

Note: the major phases of the project are on the left, key aspects of each phase in the middle, and outcomes of the phase that lead to the next one on the right. Numbers in brackets highlight the chapter in which the item is described. All chapters except 1, 9 & 11 are written as articles for publication.

Social media and professional identity: Part 3 (Mendeley)

Academic social networks: Mendeley
Everyone is familiar with Facebook and many people have heard of Google+ so I’m not going to spend much time reviewing them, other than to say that for me, neither of them is currently a big part of my own professional presence. I use Google+ a lot but in a personal capacity not a professional one. Having said that, I’m exploring the potential of Google+ as a tool for professional development, and will probably post something about my experiences at some point in the future.

In this section I’m going to briefly discuss a few social networks that are geared towards the academic professional, although not necessarily the clinician. If you are a clinician, you may still find these social services useful, but in my experience I’ve found that clinicians are more likely to share content on the more mainstream networks like Facebook and increasingly, on Twitter.


First up is Mendeley, which is primarily a desktop (and iPad and smartphone) client that you can use to manage the research papers that you have in PDF format. It automatically extracts all of the metadata from the paper (i.e. author, title, journal, date of publication) and has some excellent search and sort features. However, one of the best features of Mendeley is its integration with the web, allowing you to sync all of your documents from any of your devices to all of your other devices. If I highlight and annotate a PDF I’m working on at the office, when I get home and sync Mendeley on my home computer, all of those highlights and PDFs are updated to mirror the changes I made at work. If I add a PDF on my home computer, that PDF is then copied to all of my other devices as well. If you’ve ever been working at home and been irritated that the document you need is at work, or lost the flash drive you use to keep all your research papers, then Mendeley is definitely worth having a look at.
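To give a sense of what that metadata extraction involves (Mendeley’s version is obviously far more sophisticated, combining embedded metadata with text parsing and online lookups), here is a minimal sketch using the pypdf library and a made-up file name:

```python
from pypdf import PdfReader

# Read only the metadata that is embedded in the PDF itself; a reference
# manager like Mendeley goes much further than this simple illustration.
reader = PdfReader("some_research_paper.pdf")
info = reader.metadata  # may be None if the PDF has no embedded metadata

print("Title: ", info.title if info else "unknown")
print("Author:", info.author if info else "unknown")
print("Pages: ", len(reader.pages))
```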

 

Mendeley is also great for connecting you with other researchers in your field, via a web interface. You have to create a profile to use the software, and by completing the profile, you make yourself more visible to other people in your network of practice. There’s a Newsfeed that tells you when people you follow have made changes (e.g. uploaded and shared a new paper, made a comment, or joined a group). At the moment, a search on Mendeley for “clinical education” identifies about 80 people who are involved in clinical education research in some way, and almost 37 000 academic papers that include clinical education as a keyword. There is an Advanced search feature that allows you to refine your search in minute detail, including the specific domain of knowledge you’re looking for. Mendeley is one of the fastest-growing research databases, and with the social features that are built in, it’s also very engaging.

 

In the screenshot below, you can see how it’s possible to access the metadata from all of your PDFs via the web interface.

Mendeley is an excellent application and service that I use for organising the research content I already have, as well as for finding new content within a narrow research field. It works really well for putting you in touch with other researchers who work in similar areas to you, and the Dashboard / Newsfeed view on the web makes it easy to keep up with those you’re following. In addition to the desktop and web versions, Mendeley is available in a “Lite” version for the iPad (see below), and the open API makes it easy for developers to create 3rd party apps for Android, for example, Droideley.

Mendeley running on the iPad, showing the “Favourites” view.

Note: Zotero is another free alternative for gathering and curating your research content. I don’t use it much, mainly because it used to be solely integrated into Firefox, which is a good thing – if you use Firefox. Zotero has recently released a standalone client which is independent of the browser.

In Part 4 of this series on the use of social media for professional development I’ll be presenting some of the features of ResearchGate, another social network geared towards academics.

Posted to Diigo 06/15/2012

    • we have only begun to understand the ways that the “social life of information” and the social construction of knowledge can reshape the ways we create learning experiences in the formal college curriculum
    • we define social pedagogies as design approaches for teaching and learning that engage students with what we might call an “authentic audience” (other than the teacher), where the representation of knowledge for an audience is absolutely central to the construction of knowledge in a course
    • social pedagogies strive to build a sense of intellectual community within the classroom and frequently connect students to communities outside the classroom
    • social pedagogies are particularly effective at developing traits of  “adaptive expertise,” which include the ability of the learner to use knowledge flexibly and fluently, to evaluate, filter and distill knowledge for effect, to translate knowledge to new situations, and to understand the limits and assumptions of one’s knowledge.
    • Equally as important is the cultivation of certain attitudes or dispositions characteristic of adaptive experts, including the ability to work with uncertainty, adapt to ambiguity or even failure, and to feel increasingly comfortable working at the edges of one’s competence
    • These kinds of adaptive traits—however valued they may be in the academy in the abstract—are often invisible and elusive in the course design and assessment process. Designing a course that promotes, supports, and perhaps even evaluates these kinds of traits in students implies that there have to be ways to make these effects visible—through some form of communication
    • Acts of representation are not merely vehicles to convey knowledge; they shape the very act of knowing
    • One of the salient research areas for higher education (and indeed other settings, such as organizational learning) is how to harness the effectiveness of informal learning in the formal curriculum.
    • Our understanding of learning has expanded at a rate that has far outpaced our conceptions of teaching. A growing appreciation for the porous boundaries between the classroom and life experience, along with the power of social learning, authentic audiences, and integrative contexts, has created not only promising changes in learning but also disruptive moments in teaching.
    • Our understanding of learning has expanded at a rate that has far outpaced our conceptions of teaching.
    • Christensen coined the phrase disruptive innovation to refer to a process “by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves ‘up market,’ eventually displacing established competitors.”
    • We might say that the formal curriculum is being pressured from two sides. On the one side is a growing body of data about the power of experiential learning in the co‑curriculum; and on the other side is the world of informal learning and the participatory culture of the Internet. Both of those pressures are reframing what we think of as the formal curriculum.
    • These pressures are disruptive because to this point we have funded and structured our institutions as if the formal curriculum were the center of learning
    • All of us in higher education need to ask ourselves: Can we continue to operate on the assumption that the formal curriculum is the center of the undergraduate experience?
    • higher education was in a powerful transition, moving from an instructional paradigm to a learning paradigm—from offering information to designing learning experiences, from thinking about inputs to focusing on outputs, from being an aggregation of separate activities to becoming an integrated design
    • our understanding of learning is expanding in ways that are at least partially incompatible with the structures of higher education institutions
    • these pressures for accountability are making us simultaneously more thoughtful and more limited in what we count as learning
    • The question that campus leaders need to address is how to reinvent a curriculum that lives in this new space
    • Technologies can play a key role here as new digital, learning, and analytics tools now make it possible to replicate some features of high‑impact activity inside classrooms, whether through the design of inquiry-based learning or through the ability to access and manipulate data, mount simulations, leverage “the crowd” for collaboration and social learning, or redesign when and how students can engage course content. Indeed, one of the most powerful aspects of today’s technologies is that many of the high‑impact features that used to be possible only in small classes can now be experienced not only at a larger scale but, in some cases, to better effect at larger scale.
    • A second response to the location problem of high-impact practices is to design for greater fluidity and connection between the formal curriculum and the experiential co-curriculum. An example is the use of e-portfolios, which allow students to organize learning around the learner rather than around courses or the curriculum.
    • “Drawing on the power of multimedia and personal narrative, recursive use of ePortfolio prompts students to expand their focus from individual courses to a broader educational process.”
    • The continued growth of e-portfolios across higher education reveals a restless search for ways to find coherence that transcends courses and the formal curriculum
    • A second pressure on the formal curriculum is the participatory culture of the web and the informal learning that it cultivates.
    • They looked at a range of web cultures, or participatory cultures, including Wikipedia, gaming environments, and grassroots organizations. They compiled a list of what they considered to be the shared and salient features of these powerful web-based communities:

      • Low barriers to entry
      • Strong support for sharing one’s contributions
      • Informal mentorship, from experienced to novice
      • A sense of connection to each other
      • A sense of ownership in what is being created
      • A strong collective sense that something is at stake
  • How many college classrooms or course experiences include this set of features? In how many courses do students feel a sense of community, a sense of mentorship, a sense of collective investment, a sense that what is being created matters?
  • Maybe that’s the intended role of the formal curriculum: to prepare students to have integrative experiences elsewhere
  • the typical school curriculum is built from content (“learning about”) leading to practice (“learning to be”), where the vast majority of useful knowledge is to be found. In a typical formal curriculum, students are first packed with knowledge, and if they stick with something long enough (i.e., major in a discipline), they eventually get to the point of engaging in practice. Brown argues that people instead learn best by “practicing the content.” That is, we start in practice, and practice drives us to content. Or, more likely, the optimal way to learn is reciprocally or spirally between practice and content.
  • Brown’s formulation echoes the growing body of inductive and inquiry-based learning research that has convincingly demonstrated increased learning gains, in certain well-designed conditions, when students are first “presented with a challenge and then learn what they need to know to address the challenge.”
  • how do we reverse the flow, or flip the curriculum, to ensure that practice is emphasized at least as early in the curriculum as content? How can students “learn to be,” through both the formal and the experiential curriculum?
  • In the learning paradigm, we are focusing not on the expert’s products but, rather, on the expert’s practice.
  • we help faculty analyze their teaching by slowing down and thinking about what it is that a student needs to do well in order to be successful with complex tasks
  • Which department is responsible for teaching students how to speak from a position of authority? Where do we find evidence of someone learning to speak from a position of authority? Which assessment rubric do we use for that? Critical thinking? Oral and written communication? Integrative learning? Lifelong learning? Of course, when faculty speak of “authority,” they mean not just volume, but the confidence that comes from critical thought and depth. Learning to “speak from a position of authority” is an idea rooted in expert practice. It is no more a “soft skill” than are the other dimensions of learning that we are coming to value explicitly and systematically as outcomes of higher education—dimensions such as making discerning judgments based on practical reasoning, acting reflectively, taking risks, engaging in civil if difficult discourse, and proceeding with confidence in the face of uncertainty.
  • Designing backward from those kinds of outcomes, we are compelled to imagine ways to ask students, early and often, to engage in the practice of thinking in a given domain, often in the context of messy problems.
  • What is the relationship between the intermediate activity and the stages of intellectual development or the constituent skills and dispositions of a discipline? What if the activities enabled by social media tools are key to helping students learn how to speak with authority?
  • If our concept of learning has outstripped our notion of teaching, how can we expand our notion of teaching—particularly from the perspective of instructional support and innovation?
  • In the traditional model of course design, a well-meaning instructor seeking to make a change in a course talks separately with the teaching center staff, with the technology staff, with the librarians, and with the writing center folks. Then, when the course is implemented, the instructor alone deals with the students in the course—except that the students are often going back for help with assignments to the technology staff, to the librarians, and to the writing center folks (although usually different people who know nothing of the instructor’s original intent). So they are completing the cycle, but in a completely disconnected way. Iannuzzi’s team‑based design thinks about all of these players from the beginning. One of the first changes in this model is that the instructor is no longer at the center. Instead, the course and student learning are at the center, surrounded by all of these other players at the table.
  • A key aspect of the team-based design is the move beyond individualistic approaches to course innovation. In higher education, we have long invested in the notion that the way to innovate is by converting faculty. This move represents a shift in strategy: instead of trying to change faculty so that they might change their courses, this model focuses on changing course structures so that faculty will be empowered and supported in an expanded approach to teaching as a result of teaching these courses.
  • we need to move beyond our old assumptions that it is primarily the students’ responsibility to integrate all the disparate parts of an undergraduate education. We must fully grasp that students will learn to integrate deeply and meaningfully only insofar as we design a curriculum that cultivates that; and designing such a curriculum requires that we similarly plan, strategize and execute integratively across the boundaries within our institutions.
  • we need to think more about how to move beyond the individualistic faculty change model. We need to get involved in team-design and implementation models on our campuses, and we need to consider that doing so could fundamentally change the ways that the burdens of innovation are often placed solely on the shoulders of faculty (whose lives are largely already overdetermined) as well as how certain academic support staff (e.g., IT organizations, student affairs, librarians) think of their professional identities and their engagement with the “curriculum.”
    • Thomson Reuters assigns most journals a yearly Impact Factor (IF), which is defined as the mean citation rate during that year of the papers published in that journal during the previous 2 years.
    • Jobs, grants, prestige, and career advancement are all partially based on an admittedly flawed concept
    • Impact factors were developed in the early 20th century to help American university libraries with their journal purchasing decisions. As intended, IFs deeply affected the journal circulation and availability
    • Until about 20 years ago, printed, physical journals were the main way in which scientific communication was disseminated
    • Now we conduct electronic literature searches on specific subjects, using keywords, author names, and citation trees. As long as the papers are available digitally, they can be downloaded and read individually, regardless of the journal whence they came, or the journal’s IF.
    • This change in our reading patterns whereby papers are no longer bound to their respective journals led us to predict that in the past 20 years the relationship between IF and papers’ citation rates had to be weakening.
    • we found that until 1990, of all papers, the proportion of top (i.e., most cited) papers published in the top (i.e., highest IF) journals had been increasing. So, the top journals were becoming the exclusive depositories of the most cited research. However, since 1991 the pattern has been the exact opposite. Among top papers, the proportion NOT published in top journals was decreasing, but now it is increasing. Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ IFs.
    • in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.
    • As the relation between IF and paper quality continues to weaken, such simplistic cash-per-paper practices based on journal IFs will likely be abandoned.
    • knowing that their papers will stand on their own might also encourage researchers to abandon their fixation on high IF journals. Journals with established reputations might remain preferable for a while, but in general, the incentive to publish exclusively in high IF journals will diminish. Science will become more democratic; a larger number of editors and reviewers will decide what gets published, and the scientific community at large will decide which papers get cited, independently of journal IFs.

Posted from Diigo. The rest of my favorite links are here.

Blended learning in clinical education

Later today I’m presenting a progress report on my PhD, at the UWC “Innovations in Teaching and Learning” colloquium. Here is the presentation:

Blogging taking a back seat for now

I’m in the process of writing up the final parts of my PhD and am hoping to submit a first full draft in August, in preparation for a final submission in November. I’m doing it by publication and so am focusing my attention on the last 2 articles I need to complete. I’ve published two, submitted one, have one almost ready for submission and a final paper that I haven’t begun yet. Together with the bridging pieces that connect the articles, I still have a lot of work to do, which is why I haven’t been blogging with any regularity lately. I’ll definitely pick up on this when my work has been submitted.

Results of the first round of my Delphi study

I’m away on a 3 day writing retreat where I’m trying to put together a full draft of the Delphi study that I’m busy wrapping up. I thought I’d take a break from writing  and do something different (from writing, I mean). I took the full text of the open-ended responses from the first round of my study and created this Wordle…because I can.

The questions were related to the attributes that clinicians and clinical supervisors thought healthcare students should have.
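For anyone who wants to do something similar, here is a minimal sketch of how a word cloud like this can be generated from the raw responses, using the Python wordcloud package. The file name is hypothetical and the styling options are just examples.

```python
from wordcloud import WordCloud

# "round1_responses.txt" is a hypothetical file containing the combined
# open-ended responses from the first Delphi round.
with open("round1_responses.txt", encoding="utf-8") as f:
    text = f.read()

# Generate and save the word cloud image.
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("delphi_round1_wordcloud.png")
```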

The Delphi method in clinical research

Thank you to Conran Joseph for his contribution to this post. We began developing this content as part of another project that we’re working on (more to come on that later) and then extended it as I made notes for a paper that I’m writing for my PhD.

Introduction
The Delphi method was developed in the 1950s with the purpose of soliciting expert opinion in order to reach consensus (Dalkey & Helmer, 1963, p. 458). It was so named because it was originally developed as a systematic, interactive means of forecasting or prediction, much like the ancient Greeks came to the Oracle at Delphi to hear their fortunes. The approach relies on a collection of opinions from a panel of experts in a domain of real-world knowledge, and aggregates those opinions to reach consensus around a topic. It differs from traditional surveys in that it attempts to identify what could, or should be, as opposed to what is (Miller, 2006).

Delphi studies are generally used to (Delbecq, Van de Ven & Gustafson, 1975, pg. 11):

  • Determine or develop a range of possible program alternatives
  • Explore or expose underlying assumptions or information leading to different judgments
  • Seek out information which may generate a consensus on the part of the respondent group
  • Correlate informed judgments on a topic spanning a wide range of disciplines
  • Educate the respondent group as to the diverse and interrelated aspects of the topic

Some of the other key features of Delphi survey research are that the participants are unknown to each other and that the process is iterative, with each subsequent round being derived from the results of the previous one. In other words, each participant receives a summary of the range of opinions from the previous round, and is given an opportunity to reassess their own opinions based on the feedback of other panelists. This controlled feedback helps to reduce the effect of noise, defined as communication that distorts the data because it relates to individual interests and biases rather than to problem solving. The feedback takes the form of a summary of the prior iteration, distributed to the panel as an opportunity to generate additional insights and clarify what was captured in the previous iteration (Dalkey, 1972). In addition, participants need not be geographically collocated (i.e. they can be physically dispersed). This provides some level of anonymity, which also serves to reduce the effect of dominant individuals and group pressure to conform.

Within the context of clinical education, Delphi studies have been used to develop assessment practices that are not always easy to define. The modifiable behaviours and clinical competence that clinical educators are interested in are not simply the concepts and skills covered in the classroom, but rather their application in practice. Assessment of the knowledge and skills required for competent practice usually takes the form of sampling a small subset of the total possible range of items, since it isn’t feasible to assess all possible combinations. In addition, not all clinicians agree on what the most important components of practice and assessment are. The Delphi method is therefore an appropriate methodological approach for gaining consensus around the critical issues of what to assess, how it should be assessed, and what strategies can be used to improve practice. Delphi studies have been used in healthcare for the planning of services, analysis of professional characteristics and competencies, assessment tool design and curriculum development (Cross, 1999; Powell, 2003; Joseph, Hendricks & Frantz, 2011).

Designing a Delphi study
The most important aspect of your Delphi study will be participant selection, as this will directly influence the quality of the results you obtain (Judd, 1972; Taylor & Judd, 1989; Jacobs, 1996). Participants selected for a Delphi survey are usually experts in the field, and should provide valuable input that improves the understanding of problems, opportunities and solutions. Having said that, there is no standard description of who should be included in the panel, nor of what an “expert” is (Kaplan, 1971). Although there are no set criteria for selecting the panel, eligible participants should come from related backgrounds and experiences within the domain, be capable of making helpful contributions, and be open to adapting their opinions for the purpose of achieving consensus. It is not enough for participants to simply be knowledgeable in the domain being explored (Pill, 1971; Oh, 1974). While it is recommended that general Delphi studies use a heterogeneous panel (Delbecq, et al., 1975), Jones and Hunter (1995) suggest that domain specialists be used in clinical studies. Owing to the factors highlighted above, it is essential to establish the credibility of the panel in order to support the claim that they are indeed experts in the field.

The next aspect to consider is the panel size. This is often dependent on the scope of the problem and the number of knowledgeable informants / experts who are available to you, and there is no agreement in the literature on what size is optimal (Hsu & Sandford, 2007). Depending on the context, it may be that the more participants there are, the higher the degree of reliability of the aspects mentioned. However, it has been suggested that 10 to 15 participants could be sufficient if their background is homogeneous (Delbecq, Van de Ven & Gustafson, 1975).

The first round of questionnaires usually consists of open-ended questions that are used to gather specific information about an area or domain of knowledge, and serves as a cornerstone for subsequent rounds (Custer, Scarcella, & Stewart, 1999). It is acceptable for this questionnaire to be derived from the literature (Hsu & Sandford, 2007), and it need not be tested for validity or reliability. The structuring of the questionnaires, the types of questions and the number of participants will determine the data analysis techniques used to reach consensus. While the process could theoretically continue indefinitely, there is some agreement that three rounds of surveys are usually sufficient to reach a conclusion.

Procedure
The results of the first round are typically used to identify major themes emerging from the open-ended questions. Thereafter the responses are collated into questionnaires that form the basis of the subsequent rounds. From the second round onwards the data is usually analysed quantitatively, using either a rank order or rating technique (this usually depends on having a larger sample size). The results are analysed in order to determine levels of agreement in the ranking order. Researchers caution that this level of agreement should be decided on before data collection begins, and that a plan for how the data will be analysed should be devised, so that there is a clear cut-off point for inclusion and exclusion. The level of agreement is usually set at 75%, although this can be modified if agreement is not reached. In some cases, participants may also be asked to provide a rationale for their ranking decisions, especially when panelists provide opinions that lie outside the group’s consensus for a domain or topic.

Procedure of running a Delphi study

  1. Determine your objectives. What is it that you want your panelists to achieve consensus on?
  2. Design your first set of questions using an extensive review of the available literature. Be sure to base this first round of questions on the objectives you wish to achieve.
  3. Test your questions for ambiguity, time required, and appropriateness of responses. Send them out to a small sample of experts (or at least colleagues) and review their responses to ensure that your questions are useful in terms of achieving your objective.
  4. Send out the first round of the survey.
  5. Send a reminder for panelists to complete the first round, about 1-2 weeks after the initial survey was sent, although the actual time frames will depend on your study.
  6. Analyse the responses from round one, and use these results to design the survey for the second round.
  7. Test round two on a small sample of panelists, in order to make sure that the responses will provide the data you need.
  8. Send out the survey for the second round.
  9. Send a reminder for round two. Again the exact time will depend on your particular needs, and the context of your study.
  10. Analyse the responses from round two and use these results to design the survey for round three.
  11. Test the survey for the third round, and send it out when you are satisfied. Remind panelists to complete if necessary.
  12. Analyse the responses from the third round.
  13. Determine if your objectives have been achieved. Include additional rounds if you decide that you need more information.

Analysis of results
Quantitative analysis
The aspects to consider for the use of quantitative analysis are related to panel size and questionnaire design, and these in turn are often dependent on the scope of the problem and the number of knowledgeable informants/experts available to you. Some researchers believe that the more participants there are, the higher the degree of reliability of the aspects mentioned. The most widely used technique for gaining consensus in this paradigm is obtaining an agreement level. Although controversy exists about the appropriate level or cut-off point, numerous authors indicate 75% agreement as an appropriate level. Apart from obtaining a level of agreement, other rating techniques are also commonly used to reach consensus. These include ranking elements in order of importance and calculating the mean to identify the most to the least important elements. Likert-type scales are also used to determine whether an element should be included or not. Thus, the nature of the analysis will depend strongly on the structuring of the questionnaires, the types of questions and the number of participants.
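As a worked illustration of the agreement-level approach described above, here is a small sketch. The items, the ratings, and the decision to count 4s and 5s on a 5-point scale as “agreement” are all assumptions made for the example; your own cut-offs should be decided before data collection begins, as noted earlier.

```python
# Hypothetical panel ratings on a 5-point Likert scale for two candidate items.
ratings = {
    "Communicates clearly with patients": [5, 4, 5, 4, 5, 3, 4, 5],
    "Reflects critically on own practice": [4, 3, 2, 5, 3, 4, 3, 2],
}

AGREEMENT_LEVEL = 0.75  # the commonly cited 75% threshold

for item, scores in ratings.items():
    # Proportion of panelists rating the item 4 or 5 counts as "agreement" here.
    agreement = sum(1 for s in scores if s >= 4) / len(scores)
    mean_score = sum(scores) / len(scores)
    decision = "include" if agreement >= AGREEMENT_LEVEL else "exclude or revisit"
    print(f"{item}: {agreement:.0%} agreement, mean {mean_score:.2f} -> {decision}")
```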

Qualitative analysis
A qualitative Delphi study does not rely on statistical measures to establish consensus among participants. Rather, it is the analysis of emergent themes (assuming no structure was imposed at the outset) that gives rise to the conclusion. The results from open-ended questions will usually be in the form of short narratives, which should be analysed using qualitative techniques. The researcher reviews the responses and categorises them into emerging themes. This process continues until saturation is reached i.e. until no new information or themes arise. These themes can then either be used to form the basis of the next round of questions (as in an exploratory or developmental Delphi study), or they can be used to derive a list of items that panelists can rank.

Advantages and disadvantages of using the Delphi method
Whereas in committees and face-to-face meetings, dominant individuals may monopolise the direction of the conversation, the Delphi method prevents this by placing all responses on an “equal” footing. Anonymity also means that participants should only take into account the information before them, rather than the reputation of any particular speaker. Anonymity also allows for the expression of personal opinions, open critique, and admission of errors by giving opportunities to revise earlier judgments. In addition, the researcher is able to filter, summarise and discard irrelevant information, which may be distracting for participants in face-to-face meetings. Thus, potentially distracting group dynamics are removed from the equation (Hsu & Sandford, 2007).

One of the major disadvantages is that there is a high risk of both a low response rate and attrition. In addition, a Delphi study typically takes a lot of time and adds significantly to the workload of the researcher. However, the advantages of using a Delphi study in the right context add value that is difficult to achieve with other methods.

Conclusion
The Delphi method is a useful means of establishing consensus around topics that have no set outcomes and which are open to debate. The credibility of the panel you select for your study is vital if you want to ensure the results are taken seriously.

References

  • Butterworth T. & Bishop V. (1995) Identifying the characteristics of optimum practice: findings from a survey of practice experts in nursing, midwifery and health visiting. Journal of Advanced Nursing 22, 24–32
  • Cross, V. (1999). The Same But Different: A Delphi study of clinicians’ and academics’ perceptions of physiotherapy undergraduates. Physiotherapy, 85(1), 28-39
  • Custer, R. L., Scarcella, J. A., & Stewart, B. R. (1999). The modified Delphi technique: A rotational modification. Journal of Vocational and Technical Education, 15 (2), 1-10
  • Dalkey, N. C. & Helmer, O. (1963). An experimental application of the Delphi Method to the use of experts. Management Science, 9(3), 458 – 468
  • Delbecq, A.L., Van de Ven, A.H. & Gustafson, D.H. (1975). Group Techniques for Program Planning: a guide to nominal group and Delphi processes
  • Hsu, C.-chien, & Sandford, B. (2007). The Delphi Technique: Making sense of consensus. Practical Assessment, Research and Evaluation, 12(10)
  • Jacobs, J. M. (1996). Essential assessment criteria for physical education teacher education programs: A Delphi study. Unpublished doctoral dissertation, West Virginia University, Morgantown
  • Jones J. & Hunter, D. (1995). Qualitative research: Consensus methods for medical and health services research. British Medical Journal, 311, 376–380
  • Joseph, C., Hendricks, C., & Frantz, J. (2011). Exploring the Key Performance Areas and Assessment Criteria for the Evaluation of Students’ Clinical Performance: A Delphi study. South African Journal of Physiotherapy, 67(2), 1-7
  • Judd, R. C. (1972). Use of Delphi methods in higher education. Technological Forecasting and Social Change, 4 (2), 173-186
  • Kaplan, L. M. (1971). The use of the Delphi method in organizational communication: A case study. Unpublished master’s thesis, The Ohio State University, Columbus
  • Miller, L. E. (2006, October). Determining what could/should be: The Delphi technique and its application. Paper presented at the meeting of the 2006 annual meeting of the Mid-Western Educational Research Association, Columbus, Ohio
  • Murphy M.K., Black N., Lamping D.L., McKee C.M., Sanderson C.F.B., Askham J. et al. (1998) Consensus development methods and their use in clinical guideline development. Health Technology Assessment 2(3)
  • Oh, K. H. (1974). Forecasting through hierarchical Delphi. Unpublished doctoral dissertation, The Ohio State University, Columbus
  • Pill, J. (1971). The Delphi method: Substance, context, a critique and an annotated bibliography. Socio-Economic Planning Science, 5, 57-71
  • Powell, C. (2003). The Delphi technique: myths and realities. Journal of advanced nursing, 41(4), 376-82
  • Skulmoski, G. J., & Hartman, F. T. (2007). The Delphi Method for Graduate Research. Journal of Information Technology Education, 6

Jan Herrington’s model of Authentic learning

A few days ago I met with my supervisor  to discuss my research plan for the year. She suggested I look into Jan Herrington’s work on authentic learning so I thought I’d make some notes here as I familiarize myself with it.

To begin with, there are 9 elements of authentic learning (I believe that in designing our blended module we’ve managed to cover most of these elements. I’ll write that process up another time):

  1. Provide authentic contexts that reflect the way the knowledge will be used in real life
  2. Provide authentic tasks and activities
  3. Provide access to expert performances and the modelling of processes
  4. Provide multiple roles and perspectives
  5. Support collaborative construction of knowledge
  6. Promote reflection to enable abstractions to be formed
  7. Promote articulation to enable tacit knowledge to be made explicit
  8. Provide coaching and scaffolding by the teacher at critical times
  9. Provide for authentic assessment of learning within the tasks

The above elements are non-sequential.

“Authentic activities” don’t necessarily mean “real”, as in constructed in the real-world (e.g. internship), only that they are realistic tasks that enable students to behave as they would in the real-world.

Here are 10 characteristics of authentic activities (Reeves, Herrington & Oliver, 2002). Again, I believe that we’ve designed learning activities and tasks that conform – in general – to these principles. It’s affirming to see that our design choices are being validated as we move forward. In short, authentic tasks:

  1. Have real-world relevance i.e. they match real-world tasks
  2. Are ill-defined (students must define tasks and sub-tasks in order to complete the activity) i.e. there are multiple interpretations of both the problem and the solution
  3. Are complex and must be explored over a sustained period of time i.e. days, weeks and months, rather than minutes or hours
  4. Provide opportunities to examine the task from different perspectives, using a variety of resources i.e. there isn’t a single answer that is the “best” one. The use of multiple resources requires that students differentiate between relevant and irrelevant information
  5. Provide opportunities to collaborate i.e. collaboration is inherent in, and integral to, the task
  6. Provide opportunities to reflect i.e. students must be able to make choices and reflect on those choices
  7. Must be integrated and applied across different subject areas and lead beyond domain-specific outcomes i.e. they encourage interdisciplinary perspectives and enable diverse roles and expertise
  8. Are seamlessly integrated with assessment i.e. the assessment tasks reflect real-world assessment, rather than separate assessment removed from the task
  9. Result in a finished product, rather than as preparation for something else
  10. Allow for competing solutions and diversity of outcome i.e. the outcomes can have multiple solutions that are original, rather than a single “correct” response

Design principles for authentic e-learning (Herrington, 2006)

“Authentic learning” places the task as the central focus for authentic activity, and is grounded in part in the situated cognition model (Brown et al, 1989) i.e. meaningful learning will only occur when it happens in the social and physical context in which it is to be used.

“How can situated theories be operationalized?” (Brown & Duguid, 1993, 10). Herrington (2006) suggests that the “9 elements” framework can be used to design online, technology-based learning environments based on theories of situated learning.

The most successful online learning environments:

  • Emphasised education as a process, rather than a product
  • Did not seek to provide real experiences but to provide a “cognitive realism”
  • Accepted the need to assist students to develop in a completely new way

There is a tendency when using online learning environments to focus on the information processing features of computers and the internet. There is rarely an understanding of the complex nature of learning in unfamiliar contexts in which tasks are “ill-defined”.

The “physical fidelity” (how real it is) of the material is less important than the extent to which the activity promotes “realistic problem-solving processes” i.e. its cognitive realism. “The physical reality of the learning situation is of less importance than the characteristics of the task design, and the engagement of students in the learning environment” (Herrington, Oliver, & Reeves, 2003a).

Learners may need to be assisted in coming to terms with the fact that the simulated reality of their task is in fact, an authentic learning environment. It may call for their “willing suspension of disbelief” (Herrington, 2006).

There is a need for design-based research into the efficacy of authentic learning to better understand the affordances and challenges of the approach.

An instructional design framework for authentic learning environments (Herrington & Oliver, 2000)
One of the difficulties with higher education is teaching concepts, etc. in a decontextualised situation, and then expecting students / graduates to apply what they’ve learned in another situation. This is probably one of the biggest challenges in clinical education, with people being “unable to access relevant knowledge for solving problems”.

“Information is stored as facts, rather than as tools” (Bransford, Sherwood, Hasselbring, Kinzer & Williams, 1990). When knowledge and context are separated, knowledge is seen by learners as a product of education, rather than a tool to be used within dynamic, real-world situations. Situated learning is a model that encourages the learning of knowledge in contexts that reflect the way in which the knowledge is to be used (Collins, 1988).

Useful tables and checklists on pg. 4-6 and pg. 8-10 of Herrington & Oliver, 2000. An instructional design framework for authentic learning environments
An “ill-defined” problem isn’t prescriptive, lacks boundaries, doesn’t provide guiding questions and doesn’t break the global task into sub-tasks. Students are expected to figure out those components on their own. We’re beginning by providing boundaries and structure. As we move through subsequent cases, the facilitators will withdraw structure and guidance, until by the end of the module, students are setting their own, personal objectives. Students should define the pathway and the steps they need to take.

Situated learning seems to be an effective teaching model when trying to guide the learning of an appropriately complex task i.e. advanced knowledge acquisition

Students benefit from the opportunity to articulate, scaffold and reflect on activities with a partner. When these opportunities are not explicitly described, students may seek them out covertly.

Students often perceive a void between theory and practice, viewing theory as relatively unimportant (jumping through hoops, in the case of our students…busy-work with no real benefit other than passing theory exams) and the practical component as all-important. They appreciate the blurring of boundaries between the two domains.

The authentic activity should present a new situation for which the students have no answer, nor for which they have a set of procedures for obtaining an answer i.e. it should be complex and the solution uncertain.

Herrington & Reeves (2003). Patterns of engagement in authentic online learning environments

There seems to be an initial reluctance to immerse oneself in the online learning environment, possibly owing to the lack of realism from contexts that are not perfect simulations of the real-world. Students may need to be encouraged to suspend their disbelief  (pg. 2). They must agree to go along with an interpretation of the world that has been created.
Once the student has accepted the presented interpretation of the world, it is only internal inconsistency that causes dissonance. Other challenges occur when students perceive the environment as being non-academic, non-rigorous, a waste of time, and unnecessary for effective learning (which may well be the case if they perceive “effective learning” as sitting passively in a classroom trying to memorise content)
Be aware that the designer of the online space may present an interpretation of the world that is not shared with everyone i.e. it is one person’s view of what the real world is like.
A willing suspension of disbelief can be likened to engagement: “…when we are able to give ourselves over to a representational action, comfortably and unambiguously. It involves a kind of complexity” (Laurel, 1993, 115). It isn’t necessary to try and perfectly simulate the real-world, only that the representation is close enough to get students engaged e.g. the quality / realism  of images doesn’t have to be perfect, as long as it enables students to get the idea.
Many students find the shift to a new learning paradigm uncomfortable. If students are not self-motivated, if they are accustomed to teacher-centred modes of instruction and if they dislike the lack of direct supervision, they may resist. They may also be uncomfortable with the increased freedom they have i.e. there is less teacher-specified content, fewer teacher-constructed objectives, and almost no teacher-led activities. On some occasions, students may feel that they are not being taught, and may express this with anger and frustration.
The facilitator is vital in terms of presenting the representation in a way that encourages engagement, much like an actor in a play must convince the audience that what is happening on the stage is “real”. Without that acceptance, you would not enjoy the play, just as the student won’t perceive the value of the learning experience.
Students need to be given the time and space to make mistakes. They will begin by working inefficiently, but the expectation is that efficiency increases over time.
We need to “humanise” the online learning experience with compassion, empathy and open-mindedness.

References

  • Bransford, J.D., Sherwood, R.D., Hasselbring, T.S., Kinzer, C.K., & Williams, S.M. (1990). Anchored instruction: Why we need it and how technology can help. In D. Nix & R. Spiro (Eds.), Cognition, education and multimedia: Exploring ideas in high technology (pp. 115-141). Hillsdale, NJ: Lawrence Erlbaum
  • Brown, J.S., & Duguid, P. (1993). Stolen knowledge. Educational Technology, 33(3), 10-15
  • Brown, J.S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42
  • Collins, A. (1988). Cognitive apprenticeship and instructional technology (Technical Report 6899): BBN Labs Inc., Cambridge, MA
  • Herrington, J. (2006). Authentic e-learning in higher education: Design principles for authentic learning environments and tasks, World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Chesapeake, Va
  • Herrington, J., & Oliver, R. (2000). An instructional design framework for authentic learning environments. Educational Technology Research and Development, 48(3), 23-48
  • Herrington, J., Oliver, R., & Reeves, T.C. (2003a). ‘Cognitive realism’ in online authentic learning environments. In D. Lassner & C. McNaught (Eds.), EdMedia World Conference on Educational
  • Herrington, J., & Reeves, T. C. (2003). Patterns of engagement in authentic online learning environments. Australian Journal of Educational Technology, 19(1), 59-71
  • Laurel, B. (1993). Computers as theatre. Reading, MA: Addison-Wesley
  • Reeves, T. C., Herrington, J., & Oliver, R. (2002). Authentic activities and online learning. HERDSA (pp. 562-567)