Critical digital pedagogy in the classroom: Practical implementation

Update (12-02-18): You can now download the full chapter here (A critical pedagogy for online learning in physiotherapy education) and the edited collection here.

This post is inspired by the work I’ve recently done for a book chapter, as well as several articles on Hybrid Pedagogy, in particular Adam Heidebrink-Bruno’s Syllabus as Manifesto. I’ve been wanting to make some changes to my Professional Ethics module for a while, and the past few weeks have given me a lot to think about. Critical pedagogy is an approach to teaching and learning that not only puts the student at the centre of the classroom but then helps them to figure out what to do now that they’re there. It also pushes teachers to go beyond the default configurations of classroom spaces. Critical digital pedagogy is when we use technology to do things in those spaces that would be difficult or impossible without it.

One of the first things we do in each module we teach is provide students with a course overview, or syllabus. We don’t even think about it, but this document might be students’ first insight into how we define the space we’re going to occupy with them. How much thought do we really give to the language and structure of the document? How much of it is informed by the students’ voice? I wondered what my own syllabus would look like if I took to heart Jesse Stommel’s suggestion that we “begin by trusting students”.

I wanted to find out more about where my students come from, so I created a shared Google Doc with a very basic outline of what information needed to be included in a syllabus. I asked them to begin by anonymously sharing something about themselves that they hadn’t shared with anyone else in the class before; something that influenced who they are and how they came to be in that class. I took what they shared, edited it, and created the Preamble to our course outline, describing our group and our context. I also added my own values, beliefs and background to the document, positioning myself and my biases up front. I wanted to let them know that, as I asked them to share something of themselves, I would do the same.

Next were the learning outcomes for the module. We say that we want our students to take responsibility for their learning, but we set up the entire programme without any input from them. We decide what they will learn, based on the outcomes we define, as well as how it will be assessed. So for this syllabus I included the outcomes that we have to have, and then asked the students to each define what “success” looks like in this module for them. Each student described what they wanted to achieve by the end of the year, wrote it as a learning outcome, decided on the indicators of progress they needed, and then set timelines for completion. So each of them would have the learning outcomes that the institution and professional body require, plus one. I think this goes some way toward acknowledging the unique context of each student, and it also gives them skills in evaluating their own progress towards goals that are personally meaningful.

I’ve also decided that the students will decide their own marks for these personal outcomes. At the end of the year they will evaluate their progress against the performance indicators they have defined, and give themselves a grade that will count 10% towards their Continuous Assessment mark. This decision was inspired by this post on contract grading from HASTAC. What I’m doing isn’t exactly the same thing, but it’s a similar concept in that students not only define what is important to them, but decide on the grade they earn. I’m not 100% sure how this will work in practice, but I’m leaning towards a shared document where students will peer review each other’s outcomes and progress. I’m interested to see what a student-led, student-graded, student-taught learning outcome looks like.
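The arithmetic behind that is simple enough to sketch. Here’s a minimal Python example of how the self-graded component might be combined with the rest of the Continuous Assessment mark; the equal weighting of the lecturer-assessed tasks is my own assumption for illustration, not part of the module’s rules.

```python
def continuous_assessment_mark(lecturer_marks: list[float], self_grade: float) -> float:
    """Combine lecturer-graded tasks with the student's self-graded personal outcome.

    lecturer_marks: percentages for the lecturer-assessed tasks (assumed, for
                    illustration, to be averaged with equal weight).
    self_grade:     the student's own grade (0-100) for their personal outcome,
                    which counts 10% towards the Continuous Assessment mark.
    """
    lecturer_component = sum(lecturer_marks) / len(lecturer_marks)
    return 0.9 * lecturer_component + 0.1 * self_grade

# A student averaging 65% on lecturer-assessed tasks who awards themselves
# 80% for their personal outcome:
print(continuous_assessment_mark([60, 70, 65], 80))  # 66.5
```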

Something that is usually pretty concrete in any course is the content. But many concepts can actually be taught in a wide variety of ways, and we just choose the ones that we’re most familiar with. For example, the concept of justice (fairness) could be discussed using the history of the profession, resource allocation for patients, Apartheid in South Africa, public and private health systems, and so on. In the same shared document I asked students to suggest topics they’d like to cover in the module; things that interest them, where I’d figure out how to teach concepts from professional ethics in those contexts. This is what they added: Income inequality. Segregation. #FeesMustFall. Can ethics be taught? The death penalty. Institutional racism. Losing a patient. That’s a pretty good range of topics that will enable me to cover quite a bit of the work in the module. It’s also more likely that students will engage, considering that these are the things they’ve identified as being interesting.

Another area that we as teachers control completely is assessment. We decide what will be assessed, when the assessment happens, how it is graded, what formats we’ll accept…we even go so far as to tell students where to put the full stops and commas in their reference lists. That’s a pretty deep level of control we’re exerting. I’ve been using a portfolio for assessment in this module for a few years, so I’m at a point where I’m comfortable with students submitting a variety of different pieces. What I’m doing differently this year is asking the students to submit each task when it’s ready, rather than by some arbitrary deadline. They get to choose when it suits them to do the work, but I have asked them to be reasonable about this, mainly because if I’m going to give them decent feedback I need time before their next piece arrives. If everything is submitted at once, there’s no time to use the feedback to improve the next submission.

The students then decided what our “rules of engagement” would be in the classroom. Our module guides usually have some kind of prescription about what behaviour is expected, so I asked the students what they thought appropriate behaviour looks like, and then to commit as a class to those rules. Unsurprisingly, their suggestions looked a lot like what I would have written myself. Then I asked them to decide how to address situations where individuals contravened our rules. I don’t want to be the policeman who has to discipline students…what would it look like if students decided in advance what would work in their classroom, and then took action when necessary? I’m pretty excited to find out.

I decided that there would be no notes provided for this module, and no textbook either. I prepare the lecture outline in a shared Google document, including whatever writing assignments the students need to work on and links to open access resources that are relevant for the topic. The students take notes collaboratively in the document, which I review afterwards, adding comments and structure and pointing them to additional resources. Together, we come up with something unique that describes our time in class. Even if the topic is static, our conversations never are, so any set of notes that focuses only on the topic will necessarily leave out the sometimes wonderful discussion that happens in class. This way, the students get the main ideas that are covered, but we also capture the conversation, which I can supplement afterwards.

Finally, I’ve set up a module evaluation form that is open for comment immediately, and I’ve committed to keeping it open for the duration of the year. The problem with module evaluations is that we ask students to complete them at the end of the year, when the module is finished and they have no opportunity to benefit from their own suggestions. I wouldn’t fill it in either. This way, students get to evaluate me and the module at any time, and I get feedback that I can act on immediately. I use a simple Google Form that they can access quickly and easily, with a couple of rating scales and an option to add an open-ended comment. I’m hoping that an ongoing evaluation option, in a format that is convenient for students, means that they will make use of it to improve our time together.
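For anyone wanting to do something similar, here’s a rough Python sketch of how the accumulating responses might be summarised from a CSV export of the form. The column names are hypothetical and would need to match whatever your own form produces.

```python
import csv
from statistics import mean

def summarise_evaluations(path: str) -> None:
    """Summarise an always-open module evaluation from a CSV export of responses.

    Assumes hypothetical columns: two 1-5 rating scales ("Module rating",
    "Lecturer rating") and an open-ended "Comment" field.
    """
    module, lecturer, comments = [], [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            module.append(int(row["Module rating"]))
            lecturer.append(int(row["Lecturer rating"]))
            if row["Comment"].strip():
                comments.append(row["Comment"].strip())
    print(f"Module:   {mean(module):.1f} / 5 over {len(module)} responses")
    print(f"Lecturer: {mean(lecturer):.1f} / 5")
    print("Recent comments:", *comments[-3:], sep="\n  - ")
```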

As we worked through the document I could see students really struggling with the idea that they were being asked to contribute to the structure of the module. Even as they commented on each other’s suggestions, there was an uncertainty there. It took a while for them to be comfortable saying what they wanted: not just being physically present in the classroom, but really contributing to the design of the module; how it would be run, how they would be assessed, how they could “be” in the classroom. I’m not sure how this is going to work out, but I felt a level of enthusiasm and energy that I haven’t felt before. I felt a glimmer of something real as they started to take seriously my offer to take them seriously.

The choices above are powerful additions to the other ways that we integrate technology into this module (the students’ portfolios are all on the IEP blog, they do collaborative authoring and peer review in Google Drive, course resources are shared in Drive, they create digital stories for one of the portfolio submissions, and occasionally we use Twitter for sharing interesting stories). It makes it very clear to the students that this is their classroom and their learning experience. I’m a facilitator, but they get to make real choices that have a real impact in the world. They get a sense of what it feels like to have power and authority, as well as the responsibility that comes with them.

Are we gatekeepers, or locksmiths?

David Nicholls at Critical Physiotherapy recently blogged about how we might think about access to physiotherapy education, and offers the metaphor of a gated community as one possibility.

The staff act as the guards at the gateway to the profession and the gate is a threshold across which students pass only when they have demonstrated the right to enter the community.

This got me thinking about the metaphors we use as academics, particularly those that guide how we think about our role as examiners. David’s post reminded me of a conversation I had with a colleague soon after entering academia. I was working as an external clinical examiner for a local university, and we were evaluating a 3rd year student who had not done very well in the clinical exam. We were talking about whether the student had demonstrated enough of an understanding of the management of the patient in order to pass. My colleague said that we shouldn’t feel bad about failing the student because “we are the gatekeepers for the profession”. The metaphor of gatekeeper didn’t feel right to me at the time, and over the next few years I struggled with the idea that part of my job was to prevent students from progressing through the year levels. Don’t get me wrong, I’m not suggesting that we allow incompetent students to pass. My issue was with how we think about our roles as teachers, and where the power to determine progression lies.

I imagine that this is how many students think of their lecturers and clinical examiners: mysterious possessors of arcane, hidden knowledge.

A gatekeeper is someone who has power to make decisions that affect someone who does not. In this metaphor, the examiner is the gatekeeper who decides whether or not to allow a student to cross the threshold. Gatekeeping is about control, and more specifically, controlling those who have less power. From the students’ perspective, the idea of examiner-as-gatekeeper moves the locus of control externally, rather than acknowledging that success is largely determined by one’s own motivation. It is the difference between taking personal responsibility for not doing well, and blaming some outside factor for poor performance (“The test was too difficult”; “The examiner was too strict”; “The patient was non-compliant”).

As long as we are the gatekeepers who control students’ progress through the degree, the locus of control exists outside of the student. They do the work and we either block them or allow them through. We have the power, not students. If they fail, it is because we failed them. It is far more powerful – and useful for learning – for students to take on the responsibility for their success or failure. To paraphrase from my PhD thesis:

If knowledge can exist in the spaces between people, objects and devices, then it exists in the relationships between them. [As lecturers, we should] encourage collaborative, rather than isolated, activity, where the responsibility for learning is shared with others in order to build trust. Facilitators must be active participants in completing the activities, while emphasising that students are partners in the process of teaching and learning, because by completing the learning activity together students are exposed to the tacit, hidden knowledge of the profession. In this way, lecturers are not authority figures who are external to the process of learning. Rather than being perceived as gatekeepers who determine progression through the degree by controlling students’ access to knowledge, lecturers can be seen as locksmiths, teaching students how to make their own keys, as and when it is necessary.

By thinking of lecturers (who are often also the examiners) as master locksmiths who teach students how to make their own keys, we move the locus of control back to the student. The gates that mark thresholds to higher levels of the profession still exist, as they should. It is right that students who are not ready for independent practice should be prevented from practising independently. However, rather than thinking of the examiner as a gatekeeper who prevents the student from crossing the threshold, we could think of the student as being unable to make the right key. The examiner is then simply an observer who recognises the student’s inability to open the gate. It is the student who is responsible for poor performance, not the examiner who is responsible for failing the student.

I therefore suggest that the gatekeeper metaphor for examiners be replaced with that of a locksmith, where students are regarded as apprentices and novice practitioners who are learning a craft. From this perspective we can more carefully appreciate the interaction that is necessary in the teaching and learning relationship, as we guide students towards learning how to make their own keys as they control their own fate.


Caveat: if we are part of a master-apprentice relationship with students, then their failure must be seen as our failure too. If my student cannot successfully create the right key to get through the gate, I must faithfully interrogate my role in that failure, and I wonder how many of us would be comfortable with that.

Thanks to David for posting Physiotherapy Education as a Gated Community, and for stimulating me to think more carefully about how the metaphors we use inform our thinking and our practice.

Workplace-based assessment

Yesterday I attended a workshop / seminar on workplace-based assessment given by John Norcini, president of FAIMER and creator of the mini-CEX. Here are the notes I took.

Methods
Summative (assessment of “acquired learning”, the form that has dominated assessment) and formative (feedback that helps learning; assessment for learning)

The methods below take assessment into the workplace, and require observation and feedback.

Portfolios (“collections of measures”) are workplace-based / encounter-based and must include observation of the encounter and procedures, along with a patient record audit, i.e. 360 degree assessment. The trainee is evaluated on the contents of the portfolio. The training programme maintains the portfolio, but the trainee may be expected to contribute to it.

“Tick box”-type assessment isn’t necessarily a problem, it depends on how faculty observe and assess the tasks on the list.

Other: medical knowledge test

The following assessment methods are all authentic, in the sense that they are based in the real world and assess students on what they are actually doing, not on what they do in an “exam situation”.

Mini-CEX
The assessor observes a trainee during a brief (5-10 min) patient encounter, evaluates the trainee on a few aspects / dimensions of the encounter, and then provides feedback. Ideally these encounters should involve different patients, different assessors and different aspects. The whole exercise should take 10-15 minutes.
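Since any single judgement is unreliable, the ratings only become meaningful once they are aggregated across encounters. Here’s a minimal Python sketch of how mini-CEX encounters might be recorded and averaged per dimension; the dimension names and the 1-9 scale are illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MiniCEX:
    """One observed patient encounter, scored on a few dimensions (1-9 here)."""
    assessor: str
    patient: str
    scores: dict[str, int]  # e.g. {"history taking": 6, "communication": 7}

def dimension_means(encounters: list[MiniCEX]) -> dict[str, float]:
    """Average each dimension across different patients and assessors."""
    pooled: dict[str, list[int]] = {}
    for e in encounters:
        for dim, score in e.scores.items():
            pooled.setdefault(dim, []).append(score)
    return {dim: mean(scores) for dim, scores in pooled.items()}
```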

Direct observation of procedural skills (DOPS)
A 10-15 minute exercise in which faculty observe a patient encounter, with an emphasis on procedures. The assessor rates the trainee along a number of dimensions, then provides feedback.

Chart stimulated recall
The assessor reviews a patient record in which the trainee has made notes. The discussion is centred on the trainee’s notes, rating things like diagnoses, planning, Rx, etc. The assessor conducts an oral exam with the trainee, asking questions around clinical reasoning based on the notes. Takes 10-15 minutes, and should cover multiple encounters. Must use actual patient records → validity / authenticity.

360 degree evaluation
The trainee nominates peers, faculty, patients, self, etc., who then evaluate the trainee. Everyone fills out the same form, which assesses clinical and generic skills. The trainee is given their self-ratings, the assessor ratings, and the mean ratings. Any discrepancy forms a foundation for discussion around misconceptions. Good for assessing teamwork, communication, interpersonal skills, etc.
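The discrepancy step is easy to make concrete. Below is a sketch of how self-ratings might be compared with the mean assessor ratings per dimension to surface talking points; the dimensions and the threshold are invented for illustration.

```python
from statistics import mean

def discrepancies(self_ratings: dict[str, int],
                  assessor_ratings: list[dict[str, int]],
                  threshold: float = 1.0) -> dict[str, float]:
    """Return dimensions where self-rating and mean assessor rating diverge.

    A positive gap means the trainee rated themselves above their assessors.
    The threshold is an arbitrary illustration, not a prescribed cut-off.
    """
    gaps = {}
    for dim, own in self_ratings.items():
        others = mean(r[dim] for r in assessor_ratings)
        if abs(own - others) >= threshold:
            gaps[dim] = own - others
    return gaps

# Teamwork over-estimated, communication under-estimated:
print(discrepancies({"teamwork": 8, "communication": 4},
                    [{"teamwork": 6, "communication": 6},
                     {"teamwork": 5, "communication": 6}]))
# {'teamwork': 2.5, 'communication': -2.0}
```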

There are forms available for these tasks, but in reality, since it’s formative, you can make up a form that makes sense for your own profession. These assessments are meant to be brief, almost informal, encounters. They should happen as part of the working process, not scheduled as part of an “evaluation” process. This should also not replace a more comprehensive, in-depth evaluation. They may also be more appropriate for more advanced trainees, and undergrad students may be better served with a “tick-list”-type assessment tool, since they’re still learning what to do.

Don’t aim for objectivity, aim for consensus. Aggregating subjective judgements brings us to what we call “objective”. We can’t remove subjectivity, even in the most rigorous MCQs, since it’s human beings who make choices about what to include. So objectivity is actually impossible to achieve, but consensus can be achieved.

For these methods, you can make the trainee responsible for the process (i.e. they can’t progress / complete without doing all the tasks), so the trainee decides which records, when it takes place, who will assess. This creates an obvious bias. Or, faculty can drive the process, in which case it often doesn’t get done.

Why are workplace methods good for learning?
There is good evidence that trainees are not observed often during their learning, i.e. there is a lack of formative assessment during the programme. Medical students are often observed for less than 10% of their time in clinical settings. If trainees aren’t being observed, they aren’t getting feedback related to their performance.

WPBA is critical for learning and has a significant influence on achievement. Feedback is one of the four major factors that influence learning, and accounts for large effect sizes. Feedback alone was effective in improving achievement in about 70% of studies. Feedback is based on observation. Good feedback often involves giving sensitive information to individuals, which can be challenging in a group. Positive feedback given early in training can have long-lasting effects, and can be given safely in groups.

Feedback given by different professions, at different levels, is a good thing for trainees. So observation of procedures, etc. should be done by a variety of people, in a variety of contexts. Assessors should be targeted based on the type of feedback they are best placed to give, i.e. feedback on what they know best. So it’s fine for a physio to give feedback on a doctor’s performance, but it might be about teamwork rather than medical knowledge.

Giving feedback is different from giving comments. Feedback creates a pathway to improvement of learning, whereas comments might just make students feel better for a short period of time.

Types of training

Massed – training concentrated into a short period of time; it is intense and faster, and results in higher levels of confidence and greater satisfaction among trainees

Spaced – training spread out over time; it results in longer retention and better performance

Retrieving information or performing a skill enhances learning. Learning isn’t only about information going in; it’s also about how to retrieve that information. Testing forces retrieval. Regular repetition of a performance leads to better performance of the task.
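To make the spacing idea concrete, here’s a toy Python sketch of an expanding-interval retrieval schedule. The doubling rule is purely illustrative (nothing this specific was prescribed in the workshop); the point is simply that retrieval is spread out over time rather than massed into one block.

```python
from datetime import date, timedelta

def spaced_schedule(start: date, sessions: int, first_gap_days: int = 1) -> list[date]:
    """Generate review dates with expanding intervals (1, 2, 4, 8... days)."""
    dates, gap = [start], first_gap_days
    for _ in range(sessions - 1):
        dates.append(dates[-1] + timedelta(days=gap))
        gap *= 2  # each retrieval is spaced further from the last
    return dates

for d in spaced_schedule(date(2016, 2, 1), 5):
    print(d.isoformat())
# 2016-02-01, 2016-02-02, 2016-02-04, 2016-02-08, 2016-02-16
```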

Faculty often disagree with one another about the quality of a directly observed performance, so you need to have several observations.
All patients are different, so you have to observe encounters with several patients.
The time frame of a long-case assessment is unreasonable in the real world, so assessment should happen within an authentic time frame.

WPBA focuses on formative assessment, requires observation and feedback, directs and creates learning, and responds to the problems of traditional clinical assessment.

Rating students on a scale of unsatisfactory, satisfactory, etc. is formative and doesn’t carry the same weight as a pass / fail, summative assessment. We also need to make sure that the dimensions of the assessment are commonly defined and understood, and that faculty expectations for the assessment are the same.

Assessment forms should be modified to suit the context in which they will be used.

Global vs. checklist assessments
The mini-CEX is a global assessment, i.e. a judgement based on a global perception of the trainee. Our own assessments are closer to global assessments. The descriptions of behaviours / dimensions are meant to indicate to assessors what they should be thinking about during the assessment.
A checklist is a list of behaviours, and when the behaviour occurs, the trainee gets a tick.
Our assessment forms were mixing the two types of form, which may be why there were problems.

Faculty development should aim to “surface disagreement”, because that is how you generate discussion.

Conducting the encounter

  • Be prepared and have goals for the session
  • Put yourself into the right position
  • Minimise external interruptions
  • Avoid intrusions

Characteristics of effective faculty development programmes (Skeff, 1997) – link to PDF

Faculty training / workshops are essential to prepare faculty to use the tools. It makes them more comfortable, as well as more stringent with students. If you’re not confident in your own ability, you tend to give students the benefit of the doubt. Workshops can be used to change role model behaviours.

Feedback

  • Addresses three aspects: Where am I going? How am I going? Where to next?
  • Four areas that feedback can focus on: the task, the process, self-regulation, and the self as a person (this last is rarely effective and should be avoided; feedback must focus on behaviour, not on the person)
  • Response to feedback is influenced by the trainee’s level of achievement, their culture, perceptions of the accuracy of the feedback, perceptions of the credibility and trustworthiness of the assessor, and perceptions of the usefulness of the feedback
  • The technique of the assessor influences the impact that the feedback has: establish an appropriate interpersonal climate, use an appropriate location, elicit the trainee’s feelings and thoughts, focus on observed behaviours, be non-judgemental, be specific, offer the right amount of feedback (avoid overwhelming), and make suggestions for improvement
  • Provide an action plan, and close the loop by getting the student to submit something

Novice student: emphasise feedback on the task / product / outcome
Intermediate student: emphasise specific processes related to the task / performance
Advanced student: emphasise global process that extends beyond this specific situation e.g. self-regulation, self-assessment.

It is necessary to “close the loop”, so give students something to do, i.e. an action plan that requires the student to go away and do something concrete that aims to improve an aspect of their performance.

Asking students what their impressions of the task were is a good way to set up self-regulation / self-assessment by the student.

Student self-report on something like confidence may be valid, but student self-report on competence is probably not, because students are not good judges of their own competence.

Summary
Provide an assessment of strengths and weaknesses, enable learner reaction, encourage self-assessment, and develop an action plan.

Quality assurance in assessment (this part of the workshop was conducted by Dr. Marietjie de Villiers)

Coming to a consensual definition:

  • External auditors (extrinsic) vs self-regulated (intrinsic)
  • Developing consensus as to what is being assessed, how, etc. i.e. developing outcomes
  • Including all role players / stakeholders
  • Aligning outcomes, content, teaching strategies, assessment i.e. are we using the right tools for the job?
  • “How can I do this better?”
  • Accountability (e.g. defending a grade you’ve given) and responsibility
  • There are logistical and bureaucratic aspects to quality assurance
  • A quality assurance framework may feel like a lot of work when everything is going smoothly, but it’s an essential “safety net” when something goes wrong
  • Quality assurance has no value if it’s just “busy work” – it’s only when it’s used to change practice, that it has value
  • Often supported with a legal framework

Some quality assurance practices by today’s participants:

  • Regular review of assessment practices and outcomes can identify trends that may not be visible at the “ground level”.
  • Problems identified should lead to changes in practice.
  • Train students how to prepare for clinical assessments. Doesn’t mean that we should coach them, but prepare them for what to expect.
  • Student feedback can also be valuable, especially if they have insight into the process.
  • Set boundaries, or constraints on the assessment so that people are aware that you’re assessing something specific, in a specific context.
  • Try to link every procedure / skill to a reference, so that every student will refer back to the same source of information.
  • Simulating a context is not the same as using the actual context.
  • Quality assurance is a long-term process, constantly being reviewed and adapted.
  • Logistical problems with very large student groups require some creativity in assessment, as well as the evaluation of the assessment.
  • Discuss the assessment with all participating assessors at a pre-exam meeting, to ensure some level of consensus re. expectations. Also hold a post-exam meeting to discuss outcomes and discrepancies.
  • Include external examiners in the assessment process. These external examiners should be practicing clinicians.

When running a workshop, getting input from external (perceived to be objective) people can give what you’re trying to do an air of credibility that may be missing, especially if you’re presenting to peers / colleagues.

2 principles:

  • Don’t aim for objectivity, aim for consensus
  • Multiple sources of input can improve the quality of the assessment

2 practical things:

  • Get input from internal and external sources when developing assessment tasks
  • Provide a standard source for procedures / skills so that all students can work from the same perspective

Article on work-based assessment from the BMJ

Peer review of teaching

Introduction
Peer review is a form of evaluation designed to provide feedback to teachers about their professional practice. The standard method of evaluating teaching is to ask students, at the end of a module or course, for their feedback on the lecturer’s performance. While student feedback does have value, it also has limitations. For example, students are often not qualified to judge a lecturer’s knowledge base or understanding of course content. They may also lack the skills to identify appropriate levels of difficulty in the assessment tasks, as well as the appropriateness of the learning objectives as they relate to the overall curriculum.

Peer review of your teaching practice should be performed over time on different occasions, by different colleagues. This will create a more reliable measure of your teaching practice, as it goes some way towards eliminating bias. Effective peer evaluation should incorporate input from multiple sources. These can include the peer review from colleagues, but should also integrate feedback from students, personal reflection (as might be obtained from a teaching portfolio), and a review of student work. Students in particular can provide input on their perceptions of the classroom instruction process, outside-classroom interactions, and their satisfaction with the lecturer’s ability to mentor them. This integration of input from various sources allows for a more comprehensive, holistic view of teaching practice.

Why you should consider using peer review
Peer review of your teaching practice has several benefits. These include:

  • The opportunity to learn from others’ perspectives
  • Being exposed to new ideas
  • New staff learning from more experienced colleagues

However, you should also be aware of some possible pitfalls. These include bias when the observer has beliefs about teaching practice that aren’t consistent with your own, and a lack of validity (when used summatively) if it is viewed as an independent indicator of teaching ability.

Peer review process guidelines
Now that you have a form that is relevant for your context, you need to implement it. Think of the process as a collaborative one, rather than an assessment. Before you begin, you should decide what class is going to be observed, and who will observe you. It may be difficult, but try to choose someone who can help identify areas in which you can improve, rather than someone you feel safe with. It may be easier to ask a friend, but you may find that they don’t give you the objective evaluation you need to develop your practice. The same goes for the activity you choose. You may want to go with a module you’re comfortable with and know well, but this doesn’t allow you much chance to improve. Instead, try to use the process as an opportunity to challenge yourself.

Before the activity begins, you should meet with the observer so that you can discuss what you will be doing during the session. The aim of this is to provide some context for them to work within. For example, you may discuss the goal of your session, specific objectives you wish to achieve, the teaching strategies you will use to achieve the objective, how you will measure this achievement, as well as any concerns you would like the observer to take note of. An activity outline that they can keep may help to remind them what you’re going to try and do during your teaching activity.

The observer should arrive 10 minutes before the class begins, since arriving late models poor behaviour to students. They should be briefly introduced to the students, and their role explained. The observer should not ask questions during the activity, as this may detract from the process and invalidate the outcome. Finally, if the activity will go on for more than an hour, decide beforehand which components the observer will stay for.

After the teaching activity you should debrief with the observer. This can be done either immediately, or after a short period of reflection. The advantage of doing it immediately is that everything is fresh in everyone’s minds, but it doesn’t allow time for both parties to reflect on the process. Whichever you choose, this session is where the observer reports on their observations for further discussion. It should be led by the person who was observed, rather than take the form of a post-mortem by the observer.

Following the discussion, it is essential to decide on a set of actions that you can take in order to move forward and use the review process as a means of improving your practice. Without setting objectives for improvement based on the feedback, and then taking action to achieve those objectives, there is little point in the peer review process. Finally, the completed evaluation form, professional development objectives, and plan of action should be archived in your teaching portfolio.

Click on the image below for an example peer review form

Hints and tips

  • Peer review has been identified as one way in which teaching practices can be improved through objective feedback from colleagues
  • In order to gain the maximum benefit from peer review, there are processes that can be followed, rather than taking an ad hoc approach
  • Peer review should always be followed by a plan of action. Without acting on the feedback, the process is little more than an administrative exercise

References and sources
Peer Review of Teaching for Promotion Applications: Peer Observation of Classroom Teaching. Information, Protocols and Observation Form for Internal Peer Review Team.

Babbie, E. & Mouton, J. (2006). The Practice of Social Research. Oxford University Press. ISBN: 0195718542.

Brent, R. & Felder, R.M. (2004). A Protocol for Peer Review of Teaching. Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition.

Butcher, C., Davies, C. & Highton, M. (2006). Designing Learning: From Module Outline to Effective Teaching. Routledge. ISBN: 9780415380300.

Graduate Attributes (2006). Curtin University of Technology.

Peer Observation Guidelines and Recommendations. University of Minnesota, Center for Teaching and Learning.

Additional reading and resources
Classroom Observation Instruments. University of Minnesota, Center for Teaching and Learning. (a list of instruments that you can use in your own teaching practice)

McKenzie, J. & Parker, N. (2011). Peer review in online and blended learning environments. Report from the Australian Learning and Teaching Council.

Harris, K., Farrell, K., Bell, M., Devlin, M. & James, R. (2008). Peer Review of Teaching in Australian Higher Education. A handbook to support institutions in developing and embedding effective policies and practices. Centre for the Study of Higher Education. ISBN 9780734040459.

Resources on Peer Observation and Review. University of Minnesota, Center for Teaching and Learning.

Teaching and learning workshop at Mont Fleur

Photo taken while on a short walk during the retreat.

A few weeks ago I spent 3 days at Mont Fleur near Stellenbosch, on a teaching and learning retreat. Next year we’re going to be restructuring 2 of our modules as part of a curriculum review, and I’ll be studying the process as part of my PhD. That part of the project will also form a case study for an NRF-funded, inter-institutional study on the use of emerging technologies in South African higher education.

I used the workshop as an opportunity to develop some of the ideas for how the module will change (more on that in another post), and these are the notes I took during the workshop. Most of what I was writing was specific to the module I was working with, so these notes are the more generic ones that might be useful for others.

————————

Content determines what we teach, but not how we teach. But shouldn’t it be the outcomes that determine the content?

“Planning” for learning

Teaching is intended to make learning possible / there is an intended relationship between teaching and learning

Learning = a recombination of old and new material in order to create personal meaning. Students bring their own experience from the world that we can use to create a scaffold upon which to add new knowledge

We usually teach what we believe is important for them to know

What (and how) we teach is often constrained by external factors:

  • Amount of content
  • Time in which to cover the content (this is not the same as “creating personal meaning”)

We think of content as a series of discrete chunks of an unspecified whole, without much thought given to the relative importance of each topic as it relates to other topics, or about the nature of the relationships between topics

How do we make choices between what to include and exclude?

  • Focus on knowledge structuring
  • What are the key concepts that are at the heart of the module?
  • What are the relationships between the concepts?
  • This marks a shift from dis-embedded facts to inter-related concepts
  • This is how we organise knowledge in the discipline

Task: map the knowledge structure of your module

“Organising knowledge” in the classroom is problematic because knowledge isn’t organised in our brains in the same way that we organise it for students / on a piece of paper. We assign content to discrete categories to make it easier for students to understand it / add it to their pre-existing scaffolds, but that’s not how it exists in our minds.

Scientific method (our students do a basic physics course in which this method is emphasised, yet they don’t transfer this knowledge to patient assessment):

  1. Observe something
  2. Construct an hypothesis
  3. Test the hypothesis
  4. Is the outcome new knowledge / expected?

Task: create a teaching activity (try to do something different) that is aligned with a major concept in the module, and also includes graduate attributes and learning outcomes. Can I do the poetry concept? What about gaming? Learners are in control of the environment; mastering the task is a symbol of valued status within the group; a game is a demarcated learning activity with set tasks that the learner has to master in order to proceed; feedback is built in; games can be time and resource constrained.

The activity should include the following points:

  • Align assessment with outcomes and teaching and learning activities (SOLO taxonomy – Structure of Observed Learning Outcomes)
  • Select a range of assessment tools
  • Justify the choice of these tools
  • Explain and defend marks and weightings
  • Meet the criteria for reliability and validity
  • Create appropriate rubrics

Assessment must be aligned with learning outcomes and modular content. It provides students with opportunities to show that they can do what is expected of them. Assessment currently highlights what students don’t know, rather than emphasising what they can do, and looking for ways to build on that strength to fill in the gaps.

Learning is about what the student does, not what the teacher does.

How do you create observable outcomes?

The activity / doing of the activity is important

As a teacher:

  • What type of feedback do you give?
  • When do you give it?
  • What happens to it?
  • Does it lead to improved learning?

Graduate attributes ↔ Learning outcomes ↔ Assessment criteria ↔ T&L activities ↔ Assessment tasks ↔ Assessment strategy

Assessment defines what students regard as important, how they spend their time and how they come to see themselves as individuals (Brown, 2001; in Irons, 2008: 11)

Self-assessment is potentially useful, although it should be low-stakes

Use a range of well-designed assessment tasks to address all of the Intended Learning Outcomes (ILOs) for your module. This will help to provide teachers with evidence of the students’ competence / understanding

In general, quantitative assessment uses marks, while qualitative assessment uses rubrics

Checklist for a rubric:

  • Do the categories reflect the major learning objectives?
  • Are there distinct levels which are assigned names and mark values?
  • Are the descriptions clear? Are they on a continuum and allow for student growth?
  • Is the language clear and easy for students to understand?
  • Is it easy for the teacher to use?
  • Can the rubric be used to evaluate the work? Can it be used for assessing needs? Can students easily identify the areas in which they need to grow?
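One way to satisfy the “distinct levels with names and mark values” point above is to treat the rubric itself as a simple data structure. A minimal Python sketch follows; the categories and level names are invented for illustration.

```python
# Each category has named levels with mark values (invented for illustration).
RUBRIC = {
    "argument":    {"emerging": 1, "developing": 2, "proficient": 3, "excellent": 4},
    "evidence":    {"emerging": 1, "developing": 2, "proficient": 3, "excellent": 4},
    "referencing": {"emerging": 1, "developing": 2, "proficient": 3, "excellent": 4},
}

def score(judgements: dict[str, str]) -> tuple[int, int]:
    """Convert per-category level judgements into a mark out of the rubric maximum."""
    earned = sum(RUBRIC[cat][level] for cat, level in judgements.items())
    maximum = sum(max(levels.values()) for levels in RUBRIC.values())
    return earned, maximum

print(score({"argument": "proficient", "evidence": "developing", "referencing": "excellent"}))
# (9, 12)
```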

Evaluation:

  • What were you evaluating and why?
  • When was the evaluation conducted?
  • What was positive / negative about the evaluation?
  • What changes did you make as a result of the feedback you received?

Evaluation is an objective process in which data is collected, collated and analysed to produce information or judgements on which decisions for practice change can be based

Course evaluation can be:

  • Teacher focused – for improvement of teaching practice
  • Learner focused – determine whether the course outcomes were achieved

Evaluation can be conducted at any time, depending on the purpose:

  • At the beginning to establish prior knowledge (diagnostic)
  • In the middle to check understanding (formative) e.g. think-pair-share, clickers, minute paper, blogs, reflective writing
  • At the end to determine the effectiveness of the course / to determine whether outcomes have been achieved (summative) e.g. questionnaires, interviews, debriefing sessions, tests

Obtaining information:

  • Feedback from students
  • Peer review of teaching
  • Self-evaluation

References

  • Knight (n.d.). A briefing on key concepts: Formative and summative, criterion and norm-referenced assessment
  • Morgan (2008). The Course Improvement Flowchart: A description of a tool and process for the evaluation of university teaching