The Delphi method in clinical research

Thank you to Conran Joseph for his contribution to this post. We began developing this content as part of another project that we’re working on (more to come on that later) and then extended it as I made notes for a paper that I’m writing for my PhD.

Introduction
The Delphi method was developed in the 1950s with the purpose of soliciting expert opinion in order to reach consensus (Dalkey & Helmer, 1963, p. 458). It was so named because it was originally developed as a systematic, interactive means of forecasting or prediction, much as the ancient Greeks came to the Oracle at Delphi to hear their fortunes. The approach solicits opinions from a panel of experts in a domain of real-world knowledge, and aggregates those opinions to reach consensus around a topic. It differs from traditional surveys in that it attempts to identify what could, or should be, as opposed to what is (Miller, 2006).

Delphi studies are generally used to (Delbecq, Van de Ven & Gustafson, 1975, p. 11):

  • Determine or develop a range of possible program alternatives
  • Explore or expose underlying assumptions or information leading to different judgments
  • Seek out information which may generate a consensus on the part of the respondent group
  • Correlate informed judgments on a topic spanning a wide range of disciplines
  • Educate the respondent group as to the diverse and interrelated aspects of the topic

Other key features of Delphi survey research are that the participants are unknown to each other and that the process is iterative, with each subsequent round being derived from the results of the previous one. In other words, each participant receives a summary of the range of opinions from the previous round, and is given an opportunity to reassess their own opinions based on the feedback of other panelists. This controlled feedback helps to reduce the effect of noise, defined as communication that distorts the data by reflecting individual interests and biases rather than problem solving. The feedback occurs in the form of a summary of the prior iteration, distributed to the panel as an opportunity to generate additional insights and clarify what was captured in the previous iteration (Dalkey, 1972). In addition, participants need not be geographically collocated (i.e. they can be physically dispersed). This provides some level of anonymity, which also serves to reduce the effect of dominant individuals and group pressure to conform.

Within the context of clinical education, Delphi studies have been used to develop assessment practices that are not always easy to define. The modifiable behaviours and clinical competence that clinical educators are interested in are not simply the concepts and skills covered in the classroom, but rather their application in practice. Assessment of the knowledge and skills required for competent practice usually involves sampling a small subset of the total possible range of items, since it isn’t feasible to assess all possible combinations. In addition, not all clinicians agree on what the most important components of practice and assessment are. The Delphi method is therefore an appropriate methodological approach that can be used to gain consensus around the critical issues of what to assess, how it should be assessed and what strategies can be used to improve practice. Delphi studies have been used in healthcare for the planning of services, the analysis of professional characteristics and competencies, assessment tool design and curriculum development (Cross, 1999; Powell, 2003; Joseph, Hendricks & Frantz, 2011).

Designing a Delphi study
The most important aspect of your Delphi study is participant selection, as this will directly influence the quality of the results you obtain (Judd, 1972; Taylor & Judd, 1989; Jacobs, 1996). Participants selected for a Delphi survey are usually experts in the field, and should provide valuable input to improve the understanding of problems, opportunities and solutions. Having said that, there is no standard description of who should be included in the panel, nor of what an “expert” is (Kaplan, 1971). Although there are no set criteria for selecting the panel, eligible participants should come from related backgrounds and experiences within the domain, be capable of making helpful contributions, and be open to adapting their opinions for the purpose of achieving consensus. It is not enough for participants to simply be knowledgeable in the domain being explored (Pill, 1971; Oh, 1974). While it is recommended that general Delphi studies use a heterogeneous panel (Delbecq et al., 1975), Jones and Hunter (1995) suggest that domain specialists be used in clinical studies. Owing to the factors highlighted above, it is essential to establish the credibility of the panel in order to support the claim that its members are indeed experts in the field.

The next aspect to consider is the panel size. This is often dependent on the scope of the problem and the number of knowledgeable informants/experts who are available to you, and there is no agreement in the literature on what size is optimal (Hsu & Sandford, 2007). Depending on the context, larger panels may yield more reliable results. However, it has been suggested that 10 to 15 participants could be sufficient if their background is homogeneous (Delbecq, Van de Ven & Gustafson, 1975).

The first round of questionnaires usually consists of open-ended questions that are used to gather specific information about an area or domain of knowledge, and serves as the cornerstone for subsequent rounds (Custer, Scarcella, & Stewart, 1999). It is acceptable for this questionnaire to be derived from the literature (Hsu & Sandford, 2007), and it need not be tested for validity or reliability. The structuring of the questionnaires, the types of questions and the number of participants will determine the data analysis techniques that are used to reach consensus. While the process could theoretically continue indefinitely, there is some agreement that three rounds of surveys are usually sufficient to reach a conclusion.

Procedure
The results of the first round are typically used to identify major themes emerging from the open-ended questions. Thereafter the responses are collated into questionnaires that form the basis of the subsequent rounds. From the second round onwards the data is usually analysed quantitatively, using either a rank-order or a rating technique (the choice usually depends on the sample size). The results are analysed in order to determine levels of agreement in the ranking order. Researchers caution that this level of agreement should be decided on before data collection begins, and that a plan for analysing the data be devised in advance, so that there is a clear cut-off point for inclusion and exclusion. The level of agreement is usually set at 75%, although this can be modified if agreement is not reached. In some cases, participants may also be asked to provide a rationale for their ranking decisions, especially when panelists provide opinions that lie outside the group’s consensus for a domain or topic.
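
To make the cut-off point concrete, here is a minimal sketch in Python of how per-item agreement might be computed against a pre-set 75% threshold. The item names, the responses and the rule that a rating of 4 or 5 counts as agreement are all hypothetical choices for illustration, not a prescribed part of the Delphi method.

```python
# Minimal sketch: flag Delphi items for inclusion or exclusion against a
# pre-set agreement threshold. Items and responses are hypothetical.

AGREEMENT_THRESHOLD = 0.75  # decided before data collection begins

# Each item maps to the panel's Likert-type ratings
# (1 = strongly disagree ... 5 = strongly agree).
responses = {
    "Demonstrates sound clinical reasoning": [5, 4, 4, 5, 3, 4, 5, 4],
    "Maintains accurate case notes": [3, 2, 4, 3, 5, 2, 3, 4],
}

def agreement_level(ratings, agree_at=4):
    """Proportion of panelists rating the item at or above `agree_at`."""
    return sum(r >= agree_at for r in ratings) / len(ratings)

for item, ratings in responses.items():
    level = agreement_level(ratings)
    verdict = "include" if level >= AGREEMENT_THRESHOLD else "exclude"
    print(f"{item}: {level:.0%} agreement -> {verdict}")
```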

Procedure of running a Delphi study

  1. Determine your objectives. What is it that you want your panelists to achieve consensus on?
  2. Design your first set of questions using an extensive review of the available literature. Be sure to base this first round of questions on the objectives you wish to achieve.
  3. Test your questions for ambiguity, completion time, and appropriateness of responses. Send the questionnaire to a small sample of experts, or at least colleagues, and review their responses to ensure that your questions are useful in terms of achieving your objective.
  4. Send out the first round of the survey.
  5. Send a reminder for panelists to complete the first round, about 1-2 weeks after the initial survey was sent, although the actual time frames will depend on your study.
  6. Analyse the responses from round one, and use these results to design the survey for the second round.
  7. Test round two on a small sample of panelists, in order to make sure that the responses will provide the data you need.
  8. Send out the survey for the second round.
  9. Send a reminder for round two. Again the exact time will depend on your particular needs, and the context of your study.
  10. Analyse the responses from round two and use these results to design the survey for round three.
  11. Test the survey for the third round, and send it out when you are satisfied. Remind panelists to complete it if necessary.
  12. Analyse the responses from the third round.
  13. Determine if your objectives have been achieved. Include additional rounds if you decide that you need more information.

Analysis of results
Quantitative analysis
The aspects to consider for quantitative analysis relate to panel size and questionnaire design, and consequently often depend on the scope of the problem and the number of knowledgeable informants/experts available to you. Some researchers believe that the more participants there are, the more reliable the results. The most widely used technique for gauging consensus in this paradigm is an agreement level. Although there is some controversy over the appropriate cut-off point, numerous authors indicate that 75% agreement is an appropriate level. Apart from obtaining a level of agreement, other rating techniques are also commonly used to reach consensus. These include ranking elements in order of importance and calculating the mean rank to order elements from most to least important. Likert-type scales are also used to determine whether an element should be included or not. Thus, the nature of the analysis will depend strongly on the structuring of the questionnaires, the types of questions and the number of participants.
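
To illustrate the rank-order technique mentioned above, here is a small Python sketch that orders elements by their mean rank across panelists. The element names and the rankings are invented for illustration.

```python
# Sketch of the rank-order technique: each panelist ranks the same set of
# elements (1 = most important), and the panel's view is summarised by
# ordering elements on mean rank. All data here is invented.

from statistics import mean

# One dict per panelist: element -> rank assigned by that panelist.
rankings = [
    {"Communication": 1, "Technical skill": 2, "Documentation": 3},
    {"Communication": 2, "Technical skill": 1, "Documentation": 3},
    {"Communication": 1, "Technical skill": 3, "Documentation": 2},
]

elements = rankings[0].keys()
mean_ranks = {e: mean(panelist[e] for panelist in rankings) for e in elements}

# A lower mean rank means the panel, on average, judged the element
# to be more important.
for element, avg_rank in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(f"{element}: mean rank {avg_rank:.2f}")
```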

Qualitative analysis
A qualitative Delphi study does not rely on statistical measures to establish consensus among participants. Rather, it is the analysis of emergent themes (provided no structure was initially imposed) that gives rise to the conclusion. The results from open-ended questions will usually be in the form of short narratives, which should be analysed using qualitative techniques. The researcher reviews the responses and categorises them into emerging themes. This process continues until saturation is reached, i.e. until no new information or themes arise. These themes can then either be used to form the basis of the next round of questions (as in an exploratory or developmental Delphi study), or they can be used to derive a list of items that panelists can rank.

Advantages and disadvantages of using the Delphi method
Whereas in committees and face-to-face meetings dominant individuals may monopolise the direction of the conversation, the Delphi method prevents this by placing all responses on an “equal” footing. Anonymity means that participants should only take into account the information before them, rather than the reputation of any particular speaker. It also allows for the expression of personal opinions, open critique, and the admission of errors, by giving opportunities to revise earlier judgments. In addition, the researcher is able to filter, summarise and discard irrelevant information, which may be distracting for participants in face-to-face meetings. Thus, potentially distracting group dynamics are removed from the equation (Hsu & Sandford, 2007).

One of the major disadvantages is a high risk of both low response rates and attrition. In addition, a Delphi study typically takes a lot of time and adds significantly to the workload of the researcher. However, used in the right context, the advantages of a Delphi study add value that is difficult to achieve with other methods.

Conclusion
The Delphi method is a useful means of establishing consensus around topics that have no set outcomes and which are open to debate. The credibility of the panel you select for your study is vital if you want to ensure the results are taken seriously.

References

  • Butterworth, T., & Bishop, V. (1995). Identifying the characteristics of optimum practice: Findings from a survey of practice experts in nursing, midwifery and health visiting. Journal of Advanced Nursing, 22, 24-32.
  • Cross, V. (1999). The same but different: A Delphi study of clinicians’ and academics’ perceptions of physiotherapy undergraduates. Physiotherapy, 85(1), 28-39.
  • Custer, R. L., Scarcella, J. A., & Stewart, B. R. (1999). The modified Delphi technique: A rotational modification. Journal of Vocational and Technical Education, 15(2), 1-10.
  • Dalkey, N. C., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9(3), 458-468.
  • Delbecq, A. L., Van de Ven, A. H., & Gustafson, D. H. (1975). Group techniques for program planning: A guide to nominal group and Delphi processes.
  • Hsu, C.-C., & Sandford, B. (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research and Evaluation, 12(10).
  • Jacobs, J. M. (1996). Essential assessment criteria for physical education teacher education programs: A Delphi study. Unpublished doctoral dissertation, West Virginia University, Morgantown.
  • Jones, J., & Hunter, D. (1995). Qualitative research: Consensus methods for medical and health services research. British Medical Journal, 311, 376-380.
  • Joseph, C., Hendricks, C., & Frantz, J. (2011). Exploring the key performance areas and assessment criteria for the evaluation of students’ clinical performance: A Delphi study. South African Journal of Physiotherapy, 67(2), 1-7.
  • Judd, R. C. (1972). Use of Delphi methods in higher education. Technological Forecasting and Social Change, 4(2), 173-186.
  • Kaplan, L. M. (1971). The use of the Delphi method in organizational communication: A case study. Unpublished master’s thesis, The Ohio State University, Columbus.
  • Miller, L. E. (2006, October). Determining what could/should be: The Delphi technique and its application. Paper presented at the annual meeting of the Mid-Western Educational Research Association, Columbus, Ohio.
  • Murphy, M. K., Black, N., Lamping, D. L., McKee, C. M., Sanderson, C. F. B., Askham, J., et al. (1998). Consensus development methods and their use in clinical guideline development. Health Technology Assessment, 2(3).
  • Oh, K. H. (1974). Forecasting through hierarchical Delphi. Unpublished doctoral dissertation, The Ohio State University, Columbus.
  • Pill, J. (1971). The Delphi method: Substance, context, a critique and an annotated bibliography. Socio-Economic Planning Science, 5, 57-71.
  • Powell, C. (2003). The Delphi technique: Myths and realities. Journal of Advanced Nursing, 41(4), 376-382.
  • Skulmoski, G. J., & Hartman, F. T. (2007). The Delphi method for graduate research. Journal of Information Technology Education, 6.

Developing case studies for holistic clinical education

This is quite a long post. Basically I’ve been trying to situate my current research into a larger curriculum development project and this post is just a reflection of my progress so far. It’s probably going to have big gaps and be unclear in sections. I’m OK with that.

Earlier this week our department had a short workshop on developing the cases that we’re going to use next year in one of our modules. We’re going to try to use cases to develop a set of skills and attitudes that are lacking in our students. These include challenges with (the text in brackets represents stereotypical student perspectives):

  • Problem solving and clinical reasoning (Tell me what the answer is so that I can memorise it)
  • Critical analysis (Everything I read has the same value)
  • Empathy (The patient is an object I use to develop technical skills)
  • Communication (The use of appropriate professional terminology isn’t important)
  • Groupwork (Assessment is a zero sum game…if you score more than me it bumps me down the ranking in the class, therefore I don’t help you)
  • Knowing vs Understanding (It’s more important for me to know the answer than to understand the problem)
  • Integration of knowledge into practice (What I learn in class is separate to what I do with patients)
  • Integration of knowledge from different domains (I can’t examine a patient with a respiratory problem because I’m on an orthopaedic rotation)
  • Poor understanding of the use of technology to facilitate learning (social networks are for socialising, not learning)

I know it might seem like a bit much to think that merely moving to case-based learning is going to address all of the above, but we think it’ll help to develop these areas in which the students are struggling. The results of my ongoing PhD research project will be helping in the development of this module in the following ways:

  • The survey I began with in 2009 has given us an idea of digital literacy skills of this population, as well as some of the ways in which they learn.
  • The systematic review has helped us to understand some of the benefits and challenges of a blended approach to clinical education.
  • The Delphi study (currently in the second round) has already identified many of the difficulties that our clinicians and clinical supervisors experience in terms of developing the professional and personal attributes of capable and competent students. This study will attempt to highlight teaching strategies that could help to address the above-mentioned problems.
  • I’ve also just finished developing and testing the data capture sheet that I’ll be using for a document analysis of the curriculum in order to determine alignment.
  • Later next year I’ll be conducting an evaluation of the new module, using a variety of methods.

All of the above information is being fed into the curriculum development process that we’re using to shift our teaching strategy from a top-down, didactic approach to a blended approach to teaching and learning. Development of the cases is one of the first major steps we’re taking as part of this curriculum development process. I’ll try to summarise how the cases are being developed and how they’ll be used in the module. This module is called “Applied Physiotherapy” and it’s basically where students learn about the physiotherapy management of common conditions.

In the past, these conditions were divided into systems and taught within those categories e.g. all orthopaedic conditions were covered together. The problem is that this effectively silos the information, and students see little crossover. In fact, reality is very rarely so conveniently categorised. Patients with orthopaedic conditions may develop respiratory complications as a result of prolonged bed rest. Patients with TB often also present with peripheral neuropathy, as a result of the association of TB with HIV. So, the purpose of the cases is also to integrate different conditions to help students understand the complexity of real-world cases.

In the first term we’ll use 2 very simple cases that each run for 3 weeks. The reason that the cases are simple is that we’re also going to be introducing many new ideas that the students may have little experience with and which are important for participation in the cases e.g. computer workshops for the online environment, concept mapping, group dynamics, presentation skills, etc. The cases will increase in complexity over time as the students feel more comfortable with the process.

Each case will have an overview that highlights the main concepts, learning outcomes, teaching activities, assessment tasks and evaluation components that the case encompasses. The case will be broken up into parts, the number of which will depend on the duration and complexity of the case. After the presentation of each part, the students (in their small groups) will go through this process:

  • What do I know that will help me to solve this problem?
  • What do I think I know that I’m uncertain of?
  • What don’t I know that I need to learn more about?

These questions should help the students develop a coherent understanding of the knowledge they already have that they can build on, as well as the gaps in understanding that they need to fill before they can move on with the case. For each part, students will allocate tasks that need to be completed before the next session; role allocation is done by each group prior to the introduction of the case. During this process, facilitators will be present within the groups in order to make sure that students have not left out important concepts e.g. precautions and contraindications of conditions.

At the next session, the members of each small group present their findings to one another. The purpose of this is to consolidate what has been learned, clarify important concepts and identify how they’re going to move forward. At the end of each week each small group presents to the larger group. This gives them the opportunity to evaluate their own work in relation to the work of others, make sure that all of the major concepts are included in their case notes, as well as opportunities to learn and practise presentation skills. Students will also be expected to evaluate other groups’ work.

There will be a significant online component to the cases in the form of a social network built on WordPress and BuddyPress. We will begin by providing students with appropriate sources that they can consult at each stage of the process. Over time we’ll help them develop skills in the critical analysis of sources, so that they begin to identify credibility and authority and choose their own sources. They will also use the social network for collaborative groupwork, communication, and the sharing of resources.

Finally, here are some of the tasks we’re going to include as part of the cases, as well as the outcomes they’re going to measure (I’ve left out citations because this has been a long post and I’m tired, but all of these are backed by research):

  • Concept mapping – determine students’ understanding of the relationships between complex concepts
  • Poetry analysis – development of personal and professional values e.g. compassion, empathy
  • Reflective blogging – development of self-awareness, critical evaluation of their own understanding, behaviours and professional practices
  • Peer evaluation – critical analysis of own and others’ work
  • Case notes – development of documentation skills
  • Presentations – ability to choose important ideas and convey them concisely using appropriate language

This is about where we are at the moment. During the next few months we’ll refine these ideas, as well as the cases, and begin with implementation next year. During my evaluation of the module, I’ll be using the results of the student tasks listed above, as well as interviews and focus groups with students and staff. We’ll review the process in June and make changes based on the results of my research project and the two others that will be running. We want the curriculum to be responsive to student needs, and so we need to build in the flexibility that this requires.

After reading through this post, I think that what I’m saying is that this forms a basic outline of how we’re defining “blended learning” for this particular module. If you’ve managed to make it this far and can see any gaping holes, I’d love to hear your suggestions on how we can improve our approach.

Results of my Delphi first round

I’ve recently finished the analysis of the first round of the Delphi study that I’m conducting as part of my PhD. The aim of the study is to identify the personal and professional attributes that determine patient outcomes, as well as the challenges faced in clinical education. These results will serve to inform the development of the next round, in which clinical educators will suggest teaching strategies that could be used to develop these attributes and overcome the challenges.

Participants from the first round had a wide range of clinical, supervision and teaching experience, as well as varied domain expertise. Several themes were identified, which are summarised below.

In terms of the knowledge and skills required of competent and capable therapists, respondents highlighted the following:

  • They must have a wide range of technical and interpersonal skills, as well as a good knowledge base, and be prepared to continually develop in this area.
  • Professionalism, clinical reasoning, critical analysis and understanding were all identified as being important, but responses contained little else to further explain what these concepts mean to them.

In terms of the personal and professional attributes and attitudes that impact on patient care and outcomes, respondents reported:

  • A diverse range of personal values that they believe have relevance in terms of patient care
  • These values were often expressed in terms of a relationship, either between teachers and students, or between students and patients
  • Emotional awareness (of self and others) was highlighted

In terms of the challenges that students face throughout their training:

  • Fear and anxiety, possibly as a result of poor confidence and a lack of knowledge and skills, leading to insecurity, confusion and uncertainty
  • Lack of self-awareness as it relates to their capacity to make effective clinical decisions and reason their way through problems
  • A disconnect between merely “providing a service” and “serving”
  • They lack positive and supportive clinical learning environments, have poor role models and often aren’t given the time necessary to reflect on their experiences
  • The clinical setting is complex and dynamic, a fact that students struggle with, especially when it comes to dealing with the uncertainty inherent in clinical practice
  • Students often “silo” knowledge and skills, and struggle to transfer between different contexts
  • Students struggle with the “hidden culture” of the profession, i.e. the language, values and norms that clinicians take for granted

These results are not significantly different from the literature in terms of the professional and personal attributes that healthcare professionals deem to be important for patient outcomes.

The second round of the Delphi is currently underway and will focus on the teaching strategies that could potentially be used to develop the attitudes and attributes highlighted in the first round.

Twitter Weekly Updates for 2011-11-21

  • Papert: “…the practice of segregating children by age into “grades” will be seen as…old-fashioned, and inhumane” http://t.co/pvXVRayG #
  • Great way to learn physics http://t.co/oNRel2Qm #
  • Scientists invent lightest material on Earth. What now? http://t.co/i1BF632n via @zite #
  • The Top 10+1 apps in the Mendeley-PLoS Binary Battle! http://t.co/oVT6cva8 via @zite #
  • Dave Cormier: Explaining Rhizomatic Learning to my five year old. http://t.co/R7Pjrdez via @zite #
  • Microsoft’s table-sized tablet Surfaces for pre-order http://t.co/UDCeAq7D via @zite? Cool health-related concept image at the end #
  • @mendeley_com I love the ipad app but hate that I can’t annotate / highlight text. Any plans for that functionality in the lite version? #
  • How odd that #Mendeley isn’t @mendeley. Made an assumption earlier today with a tweet (embarrassed face) #
  • The really basic skill today is the skill of learning http://t.co/yzfBZHcx #
  • @mendeley I love the ipad app but hate that I can’t annotate / highlight text. Any plans for that functionality in the lite version? #
  • ECAR National Study of Undergraduate Students and Information Technology, 2011 Report | EDUCAUSE http://t.co/luaja5Pl #
  • Just published the 2nd round of my survey on clinical education. If you teach healthcare students, please respond at http://t.co/mIm3l9H8 #
  • @whataboutrob Could probably make that work 🙂 #

Graphically representing a curriculum

[Image: schematic map of the Milky Way]

I’ve been a bit quiet on the blog lately, owing to the fact that I’ve been putting a lot of time into the next phase of my PhD. This post is in part an attempt to summarise and try to make sense of what’s going on there, as well as to assuage my feeling of guilt at not having posted for a while.

In terms of my research progress I’m currently running a Delphi study among clinicians and clinical educators, as well as a document analysis of the curriculum. The Delphi is trying to identify the personal and professional attributes that clinicians believe are important in terms of positively impacting patient outcomes, the relevant teaching activities that could be used to develop and assess these attributes, and any appropriate technologies that might facilitate the above teaching and learning activities.

I’m busy with the second round of the Delphi study (I’ll post the main results of the first shortly) and will begin analysing the curriculum documentation soon. The combination of these two projects will (hopefully) give me enough data to determine how we need to change the curriculum in order to better develop the attributes we’ve identified.

As part of that process I’m starting to look at curriculum mapping. What I’m struggling with at the moment is figuring out how best to represent what I’m learning about what the curriculum currently looks like, and how we need to change it. These are the difficulties I’ve come up with:

  • The learning process isn’t linear, which cuts out a narrative representation
  • A curriculum is organised by many things e.g. outcomes, content, teaching approach, assessment tasks, time, space, etc. How do you emphasise all of these (and their relationships) while keeping some measure of sanity?
  • There are many interrelated concepts i.e. multiple connections, nested connections, linear and non-linear components, etc. all of which makes a mindmap difficult to work with (mindmaps are usually hierarchical, and a curriculum presented as a hierarchy would be necessarily simplistic)
  • A Gantt chart might be useful to show how activities or projects progress over time, but it doesn’t have much scope for depth
  • Tabular representation doesn’t allow you to expand/collapse sections, or add detailed notes. It also allows only very simple, one-to-one connections e.g. content over time, but not time, content and outcomes.
  • At the moment I seem to have settled on CmapTools for concept mapping. It’s not the ideal solution but it seems to be the one that enables most of what I need (see list below)

As much as I’ve read around curriculum mapping I haven’t yet found a solution that helps me to address everything that I think I need. I know that I probably won’t be able to find a tool that enables all of the following, but this is what I’d like to be able to do:

  • Create relationships between concepts e.g. outcomes, teaching activity, assessment task, etc.
  • Emphasise the nature of the relationships
  • Annotate concepts and relationships
  • Expand and collapse sections i.e. see the big picture (e.g. national exit level outcomes) as well as drill-down into the details (e.g. lesson plans)
  • I should be able to show a process over time i.e. workflow should be built in
  • I’d like the ability to input more data over time, and delete outdated content
  • I’d like to be able to detect redundancy, inconsistency and omissions (of content, tasks, outcomes, etc.); a toy sketch of what this kind of check might look like appears after this list
  • It’d be great if it was collaborative
  • Must be able to review vertical (subjects between years) and horizontal (between subjects in the same year) alignment, as well as the sequencing of activities
  • Define a shared vocabulary for use in our department (we often use different terms for the same thing, creating confusion)
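
As a toy illustration of the “detect omissions” wish above, here is a Python sketch of a curriculum stored as typed relationships, with one trivial consistency check. Every node name and relation in it is invented; it shows the shape of the idea rather than a working curriculum-mapping tool.

```python
# Exploratory sketch: a curriculum as a set of typed relationships
# (a directed graph), with a simple omission check. All names are invented.

from collections import defaultdict

# (source, relation, target) triples; the relation carries the nature of
# the link and could itself be annotated with notes, dates, etc.
curriculum = [
    ("Exit outcome: safe practitioner", "elaborated_by", "Outcome: respiratory assessment"),
    ("Outcome: respiratory assessment", "taught_by", "Case 1: orthopaedic patient"),
    ("Outcome: respiratory assessment", "assessed_by", "Case notes rubric"),
    ("Outcome: neurological screening", "taught_by", "Case 2: TB with neuropathy"),
    # note: no "assessed_by" edge for neurological screening
]

# Index the edges by relation so that each organising principle (outcomes,
# teaching activities, assessment tasks) can be inspected as its own layer.
by_relation = defaultdict(list)
for source, relation, target in curriculum:
    by_relation[relation].append((source, target))

# Omission check: outcomes that are taught somewhere but never assessed.
taught = {source for source, _ in by_relation["taught_by"]}
assessed = {source for source, _ in by_relation["assessed_by"]}
for outcome in sorted(taught - assessed):
    print(f"Warning: '{outcome}' is taught but never assessed")
```

In principle the same store of triples could drive expand/collapse views or vertical and horizontal alignment checks, but that is exactly the tooling gap described above.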

I’ve also been looking into other domains for ideas that will help me to get a better understanding of graphical modelling to represent complex information. One example is the Unified Modelling Language (UML), a general-purpose modelling language that is used to represent the various facets of objects and systems in computer science. It is used to “…specify, visualize, modify, construct and document the artefacts of…a system”. It also offers a standard way to visualise the different elements of that system e.g. activities, actors, processes, components, etc. I’m still holding out for a modelling tool from another domain (besides education) that might serve my purposes.

During the above-mentioned process, I also had fun looking at a curriculum as a computer platform. A computer platform includes:

  • The operating system (OS), which is basically a set of instructions for what to do in certain situations, including task scheduling and resource allocation. I think that this is a useful way to think about the structure of a curriculum i.e. what should happen, when it should happen, who is responsible for it, etc.
  • Architecture (hardware) that includes the CPU, data bus, chipsets, graphics card, motherboard and sound card, and that determines, among other things, how programmes access memory. In curriculum terms, these are the physical structures that enable the manifestation of the curriculum.
  • Frameworks are collections of software libraries that contain generic functionality that can be modified within certain constraints. Frameworks allow developers to spend time working on useful features rather than having to write code for low-level functionality. Within the curriculum there are modules that share generic features e.g. problem solving. A way of assessing whether or not a student can solve problems is a generic “framework” that can be modified slightly to be used in other modules. Why should every lecturer have to re-create the same libraries of tools in order to assess the same thing in a different context?
  • Programming languages that use a standardised set of vocabulary and grammar to create a set of instructions that the OS will understand.
  • The user interface (UI) that allows a user to interact with the computer and its peripherals. This is the most visible part of the platform, and often the part that draws the most attention. This is the part of the curriculum that everyone can see: the handouts, the lectures, the assessment tasks, i.e. what the students and lecturers use to interact with the curriculum. It is also the part that people will love or hate. No matter how “good” the underlying structure is, the student engages with the UI, and most people in higher education haven’t caught onto the idea that “pretty is a feature”.

Schematic transit maps and Venn diagrams might also be useful in terms of thinking about curriculum mapping in a different way. I’m inclined to think that a combination of all of the above will be an interesting experiment.

I guess the biggest issue I’m having is trying to figure out a way to show how we can go from what we have to what we want, from a very high to very low level. It’s harder than I thought it’d be…

Twitter Weekly Updates for 2011-10-24

  • Daily Papert http://t.co/IzTvBxZk. What is the role of the teacher in society? #
  • Nudity, Pets, Babies, and Other Adventures in Synchronous Online Learning http://t.co/pRyPVvzU #
  • If you are a clinician who supervises or teaches healthcare students, please consider completing my survey at http://t.co/x1MXf3AJ #
  • The hierarchical structure of an ePortfolio http://t.co/65gIpn5Y. If your e-portfolio is structured hierarchically, you’re doing it wrong #
  • @mpascoe if they don’t perceive that the class has value, then it doesn’t, at least not for them. Forced attendance won’t change that #
  • Don’t offer students grades in return for attendance in your classes. Just be interesting #
  • @suhaifa hey Su, it was a great day to be out and about. Glad that you and @jacquesmillard could make it #