Categories
technology writing

Give feedback on “A critical digital pedagogy for education in the 21st century”?

Update (12-02-18): You can now download the full chapter here (A critical pedagogy for online learning in physiotherapy education) and the edited collection here.

I finally managed to put together some ideas for my chapter on critical digital pedagogy in the CPN book on critical perspectives in practice. I split the chapter into 4 sections, excluding an introduction and conclusion (because they’re likely to change with future editing), which you can find here:

  1. Background: In which I explain the point of this short series of posts.
  2. Command and control: In which I describe how higher education today revolves around the idea that students should sit still, be quiet, and do nothing that might be considered interesting or creative.
  3. Weapon of mass instruction: In which I argue that technology is being used to reinforce the conditions promoting conformity and a culture of oppression.
  4. Education as the practice of freedom: In which I discuss critical pedagogy as a way of thinking about teaching that aims to liberate students and teachers from institutionalised education.
  5. Teaching at the edges of chaos: In which I explore some aspects of the open web that may be used to implement a critical digital pedagogy in higher education.

Now that the draft is finished, I thought I’d try a little experiment. In addition to being able to comment on the posts above, I wondered what it would be like to get public feedback on the whole chapter. I’ve shared the document in Google Drive and would love to hear any thoughts you may have on it. If you’d like, you can also download the full document as a PDF here. Please note that this is a first complete draft, so there’s probably still going to be some heavy editing.

As Yeats said: “I have spread my dreams under your feet. Tread softly for you tread on my dreams.”

Categories
teaching

I have spread my dreams under your feet…

I try to keep this in mind whenever I give feedback.

Categories
assessment learning

Providing students with audio feedback

I’ve started providing my students with audio feedback on a set of about 60 clinical case studies that they recently submitted. I was depressed at the thought of having to write out my feedback; I tend to provide a lot of detail because I almost always try to provide a rationale for the comments I’ve made. I want the students to understand why I’m suggesting the changes, which can be really time consuming when I have a lot of documents.

This semester I decided to try audio feedback (Cavanaugh & Song, 2010) as a method of providing input on the students’ drafts and I have to say, it’s been fantastic. I take about the same amount of time per document (10 – 15 minutes) because I find that I give more detail in my spoken feedback, compared to the written feedback, so this is not about saving time. I realised that when I write / type comments there are some points I don’t make, because explaining the reason for the comment would take more space than the margin allows.

In addition, I’ve found that I use a more conversational tone – which the students really appreciate – and because I’m actually speaking to the student, I pay less attention to line items, e.g. spelling corrections and punctuation issues. In other words, I give more global comments instead of local comments, and obviously don’t use Track Changes. As I mentioned earlier, I provide more detail, explaining the reasoning behind my comments and why it’s important that students address them.

Students have given me feedback on this process, and 100% of those who responded to my request for comment said that they prefer receiving feedback in this way. One of them reported that hearing my comments on his draft allowed him to “hear how I think”. This comment reminded me of the think-aloud protocol, which is a way for experts to model thinking practices to novices (Durning et al., 2014). This insight led to a slight change in how I structured the feedback: I now “think” my way through the piece, even pausing to tell the student that I’m struggling to put into words an impression or feeling I experienced while reading. I try to make it as “real time” as possible, imagining that I’m speaking to the student directly.

I record to .mp3 at a sample rate of 44.1 kHz and a bit rate of 128 kbit/s, which offers decent audio quality at a file size small enough to make emailing feasible. This is my basic process for recording audio feedback:

  1. Read through the entire document, making mental notes of the most important points I want to make
  2. Go back to the beginning of the document and start the recorder
  3. Greet the student and identify the piece I’m commenting on
  4. Provide an overview of my thoughts on the document in its entirety (structure, headings, logical flow, etc.)
  5. Work through the different sections of the document (e.g. Introduction, Method, Results, etc.), providing more detailed thoughts on the section, pausing the recorder between sections to give myself time to identify specific points I want to make
  6. End with a summing up of what was done well and the 3-5 major points that need to be addressed
  7. Stop the recorder, rename the audio file (student name – course code – title) and email it to the student
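The final step (renaming and emailing the file) is easy to script if you’re doing 60 of these. A minimal Python sketch, assuming hypothetical names, addresses and a local SMTP server that you’d substitute for your own:

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

def feedback_filename(student: str, course: str, title: str) -> str:
    """Build the 'student name - course code - title' file name."""
    return f"{student} - {course} - {title}.mp3"

def send_feedback(recording: Path, student: str, course: str, title: str,
                  to_addr: str, from_addr: str,
                  smtp_host: str = "localhost") -> Path:
    # Rename the raw recording to the standard naming scheme.
    named = recording.with_name(feedback_filename(student, course, title))
    recording.rename(named)
    # Attach the mp3 and send. At 128 kbit/s, a 15-minute recording is
    # roughly 128_000 * 900 / 8 bytes, about 14 MB, small enough to email.
    msg = EmailMessage()
    msg["Subject"] = f"Audio feedback: {title}"
    msg["From"], msg["To"] = from_addr, to_addr
    msg.set_content("Hi, please find my recorded feedback attached.")
    msg.add_attachment(named.read_bytes(), maintype="audio",
                       subtype="mpeg", filename=named.name)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
    return named
```

This only automates the clerical part of step 7; the recording itself, and the pauses between sections, still happen in whatever recorder you prefer.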

Categories
assessment

Accepting student work as a gift

A few months ago we invited a colleague from the institution to give a short presentation in my department, sharing some of her ideas around research. At some point in the session, she said “I offer this to you, because…”. I forget the rest of the sentence, but what was striking to me was how it began. It really resonated with something I’d read earlier this year, from Ronald Barnett’s book “A will to learn: Being a student in an age of uncertainty”. From Barnett:

Here are gifts given without any hope of even a ‘thank-you’, yet this ‘gift-giving’ looks for some kind of return. The feedback may come late; the marks may not be as hoped, but the expectation of some return is carried in these gifts. The student’s offerings are gifts and deserve to be recognized as such, despite their hoped-for return.

….

The language that I have in mind is one of proffering, of tendering, of offering, of sharing, and of presenting and gifting. The pedagogical relationship may surely be understood in just these terms, as a setting of gift-giving that at least opens a space for mutual obligations attendant upon gift-giving.

….

In the pedagogical setting, the student engages in activities circumscribed by a curriculum. Those activities are implicitly judged to be worthwhile, for the curriculum has characteristically been formally sanctioned (typically through a university’s internal course validation procedures). However, those curricula activities are not just worthwhile in themselves for they are normally intended to lead somewhere. In that leading somewhere, there is something that emerges, whether it be the result of a laboratory experiment, a problem that has been solved, an essay that has been submitted or a design that has been created. These are pedagogical offerings.

….

Both the teacher and the taught put themselves forward, offer themselves, give themselves. They even, to some extent, exchange themselves.

I think that there is something incredibly powerful that happens when we begin to think about the work that the student submits (offers) as a gift. Something that they have given of themselves, a representation of the time, effort and thought they have put into a creative work. If we think about the student’s offering as a gift, surely it must change the way it is treated and the way we respond? How does feedback and assessment change if we think of them as responses to gifts? Or, as gifts themselves? Would our relationships with students change (be enhanced?) if we thought of their submissions and our feedback as mutual gifts, offered to each other as representations of who we are?

Categories
assessment physiotherapy

Understanding vs knowing

Final exams vs. projects – nope, false dichotomy: a practical start to the blog year (by Grant Wiggins)

Students who know can:

  • Recall facts
  • Repeat what they’ve been told
  • Perform skills as practiced
  • Plug in missing information
  • Recognize or identify something that they’ve been shown before

Whereas students who understand can:

  • Justify a claim
  • Connect discrete facts on their own
  • Apply their learning in new contexts
  • Adapt to new circumstances, purposes or audiences
  • Criticize arguments made by others
  • Explain how and why something is the case

IF understanding is our aim, THEN the majority of the assessments (or the weighting of questions in one big assessment) must reflect one or more of the phrases above.

In the Applied Physiotherapy module that we teach using a case-based learning approach, we’re trying to structure our feedback to students in terms that help them to construct their work in ways that explicitly address the items listed above. We use Google Drive to give feedback to students as they develop their own notes, and try to ensure that the students are expressing their understanding by creating relationships between concepts.

One of the major challenges has been to shift mindsets (both students’ and facilitators’) away from the idea that knowing facts is the same as understanding. As much as we try to emphasise that one can know many facts and still not understand, it’s still clear that this distinction does not come easily to everyone. Both students and some colleagues believe that knowing as many facts as possible is the key to being a strong practitioner, even though the evidence shows that decontextualised knowledge is not helpful in practice situations.

The list above, describing what student understanding “looks like”, helps those facilitators and students who struggle with the shift in thinking to better grasp what we’re aiming for.

Categories
assessment

Workplace-based assessment

Yesterday I attended a workshop / seminar on workplace-based assessment given by John Norcini, president of FAIMER and creator of the mini-CEX. Here are the notes I took.

Methods
Assessment can be summative (assessment of “acquired learning”, which has dominated assessment) or formative (feedback that helps learning; assessment for learning)

The methods below bring assessment into the workplace, and require observation and feedback

Portfolios (“collection of measures”) are workplace-based / encounter-based and must include observation of the encounter and procedures, with a patient record audit i.e. 360 degree assessment. Trainee evaluated on the contents of the portfolio. The training programme maintains the portfolio, but the trainee may be expected to contribute to it.

“Tick box”-type assessment isn’t necessarily a problem, it depends on how faculty observe and assess the tasks on the list.

Other: medical knowledge test

The following assessment methods are all authentic, in the sense that they are based in the real world and assess students on what they actually do, not what they do in an “exam situation”.

Mini-CEX
The assessor observes a trainee during a brief (5-10 minute) patient encounter, evaluates the trainee on a few aspects / dimensions of the encounter, and then provides feedback. Ideally there should be different patients, different assessors and different aspects across encounters. The whole process should take 10-15 minutes.

Direct observation of procedural skills (DOPS)
A 10-15 minute exercise in which faculty observe a patient encounter, with an emphasis on procedures. The assessor rates the trainee along a number of dimensions, and then provides feedback.

Chart stimulated recall
The assessor reviews a patient record in which the trainee has made notes. Discussion centres on the trainee’s notes, rating things like diagnoses, planning, Rx, etc. The assessor conducts an oral exam with the trainee, asking questions around clinical reasoning based on the notes. Takes 10-15 minutes, and should cover multiple encounters. Must use actual patient records → validity / authenticity.

360 degree evaluation
Trainee nominates peers, faculty, patients, self, etc., who then evaluate the trainee. Everyone fills out the same form, which assesses clinical and generic skills. The trainee is given self-ratings, assessor ratings and mean ratings. Discrepancies form a foundation for discussion around misconceptions. Good for assessing teamwork, communication, interpersonal skills, etc.

There are forms available for these tasks, but in reality, since it’s formative, you can make up a form that makes sense for your own profession. These assessments are meant to be brief, almost informal, encounters. They should happen as part of the working process, not scheduled as part of an “evaluation” process. This should also not replace a more comprehensive, in-depth evaluation. They may also be more appropriate for more advanced trainees, and undergrad students may be better served with a “tick-list”-type assessment tool, since they’re still learning what to do.

Don’t aim for objectivity, aim for consensus. Aggregating subjective judgements brings us to what we call “objective”. We can’t remove subjectivity, even in the most rigorous MCQs, since it’s human beings who make choices about what to include. So objectivity is actually impossible to achieve, but consensus can be achieved.
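The idea that aggregating subjective judgements produces something stable can be made concrete. A small sketch with invented mini-CEX ratings (1-9 scale; not data from the workshop):

```python
from statistics import mean

# Each assessor's ratings of the same trainee over three encounters.
# Different assessors, different patients, different aspects: no single
# score is "objective", but the aggregate converges on a consensus view.
ratings = {
    "assessor_a": [6, 7, 5],
    "assessor_b": [7, 7, 6],
    "assessor_c": [5, 6, 6],
}

# Per-assessor means show individual (subjective) tendencies ...
per_assessor = {a: mean(scores) for a, scores in ratings.items()}

# ... while the overall mean across all nine encounters is the
# aggregated judgement we end up treating as "objective".
overall = mean(s for scores in ratings.values() for s in scores)
print(round(overall, 2))
```

The same logic is why the mini-CEX guidance above insists on several observations: one encounter samples one assessor and one patient, while the aggregate samples the consensus.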

For these methods, you can make the trainee responsible for the process (i.e. they can’t progress / complete without doing all the tasks), so the trainee decides which records, when it takes place, who will assess. This creates an obvious bias. Or, faculty can drive the process, in which case it often doesn’t get done.

Why are workplace methods good for learning?
There is good evidence that trainees are not observed often during their learning, i.e. there is a lack of formative assessment during the programme. Medical students are often observed for less than 10% of their time in clinical settings. If trainees aren’t being observed, they aren’t getting feedback related to their performance.

WPBA is critical for learning and has a significant influence on achievement. Feedback is one of the 4 major factors that influence learning, and accounts for massive effect sizes. Feedback alone improved achievement in about 70% of studies. Feedback is based on observation. Good feedback is often about providing sensitive information to individuals, which can be challenging in a group. Positive feedback given early in training can have long-lasting effects, and can be given safely in groups.

Feedback given by different professions, at different levels, is a good thing for trainees. So, observation of procedures, etc. should be done by a variety of people, in a variety of contexts. People should be targeted for feedback, based on the type of feedback they’re most appropriate to give i.e. to give feedback on what they do best. So, it’s fine for a physio to give feedback on a doctor’s performance, but it might be about teamwork ability, rather than medical knowledge.

Giving feedback is different from giving comments. Feedback creates a pathway to improvement of learning, whereas comments might just make students feel better for a short period of time.

Types of training

Massed – many people together for a short period of time, is intense, is faster, results in higher levels of confidence among trainees, and greater satisfaction

Spaced – many people, spread out over time, results in longer retention and better performance

Retrieval of information or a performance enhances learning. Learning isn’t only about information going in; it’s also about how to retrieve information. Testing forces retrieval. Regular repetition of a performance leads to better performance of a task.

Faculty who directly observe the same performance often don’t agree on the quality of the performance. So, you need to have several observations.
All patients are different, so you have to have observations of several patients.
The time frame for a long-case assessment is unreasonable in the real world, so assessment should be within a time frame that is authentic.

WPBA focuses on formative assessment, requires observation and feedback, directs and creates learning, and responds to the problems of traditional clinical assessment.

Rating students on a scale of unsatisfactory, satisfactory, etc. is formative and doesn’t carry the same weight as a pass / fail, summative assessment. We also need to make sure that the dimensions of the assessment are commonly defined or understood, and that faculty expectations for the assessment are the same.

Assessment forms should be modified to suit the context in which they will be used.

Global vs. checklist assessments
The mini-CEX is a type of global assessment, i.e. it’s a judgement based on a global perception of the trainee. Our assessments are more global assessments. The descriptions of behaviours / dimensions are meant to indicate to assessors what they should be thinking about during the assessment.
A check list is a list of behaviours, and when the behaviour occurs, the trainee gets a tick.
Our assessment forms were mixing the two types of form, which may be why there were problems.

Faculty development should aim to “surface disagreement”, because that is how you generate discussion.

Conducting the encounter

  • Be prepared and have goals for the session
  • Put yourself into the right position
  • Minimise external interruptions
  • Avoid intrusions

Characteristics of effective faculty development programmes (Skeff, 1997) – link to PDF

Faculty training / workshops are essential to prepare faculty to use the tools. It makes them more comfortable, as well as more stringent with students. If you’re not confident in your own ability, you tend to give students the benefit of the doubt. Workshops can be used to change role model behaviours.

Feedback

  • Addresses three aspects: Where am I going? How am I going? Where to next?
  • Four areas that feedback can focus on: task, process, self-regulation, self as a person (this last point is rarely effective, and should be avoided, therefore feedback must focus on behaviour, not on the person)
  • Response to feedback is influenced by the trainee’s level of achievement, their culture, perceptions of the accuracy of the feedback, perceptions of the credibility and trustworthiness of the assessor, and perceptions of the usefulness of the feedback
  • The technique of the assessor influences the impact that the feedback has: establish an appropriate interpersonal climate, choose an appropriate location, elicit the trainee’s feelings and thoughts, focus on observed behaviours, be non-judgemental, be specific, offer the right amount of feedback (avoid overwhelming), and make suggestions for improvement
  • Provide an action plan and close the loop by getting student to submit something

Novice student: emphasise feedback on the task / product / outcome
Intermediate student: emphasise specific processes related to the task / performance
Advanced student: emphasise global process that extends beyond this specific situation e.g. self-regulation, self-assessment.

It is necessary to “close the loop”, so give students something to do, i.e. an action plan that requires the student to go away and do something concrete that aims to improve an aspect of their performance.

Asking students what their impressions of the task were is a good way to set up self-regulation / self-assessment by the student.

Student self-report on something like confidence may be valid, but student self-report on competence is probably not, because students are not good judges of their own competence.

Summary
Provide an assessment of strengths and weaknesses, enable learner reaction, encourage self-assessment, develop an action plan.

Quality assurance in assessment (this aspect of the workshop conducted by Dr. Marietjie de Villiers)

Coming to a consensual definition:

  • External auditors (extrinsic) vs self-regulated (intrinsic)
  • Developing consensus as to what is being assessed, how, etc. i.e. developing outcomes
  • Including all role players / stakeholders
  • Aligning outcomes, content, teaching strategies, assessment i.e. are we using the right tools for the job?
  • “How can I do this better?”
  • Accountability (e.g. defending a grade you’ve given) and responsibility
  • There are logistical aspects to quality assurance, i.e. bureaucracy and logistics
  • A quality assurance framework may feel like a lot of work when everything is going smoothly, but it’s an essential “safety net” when something goes wrong
  • Quality assurance has no value if it’s just “busy work” – it’s only when it’s used to change practice, that it has value
  • Often supported with a legal framework

Some quality assurance practices by today’s participants:

  • Regular review of assessment practices and outcomes can identify trends that may not be visible at the “ground level”.
  • Problems identified should lead to changes in practice.
  • Train students how to prepare for clinical assessments. Doesn’t mean that we should coach them, but prepare them for what to expect.
  • Student feedback can also be valuable, especially if they have insight into the process.
  • Set boundaries, or constraints on the assessment so that people are aware that you’re assessing something specific, in a specific context.
  • Try to link every procedure / skill to a reference, so that every student will refer back to the same source of information.
  • Simulating a context is not the same as using the actual context.
  • Quality assurance is a long-term process, constantly being reviewed and adapted.
  • Logistical problems with very large student groups require some creativity in assessment, as well as the evaluation of the assessment.
  • Discuss the assessment with all participating assessors at a pre-exam meeting, to ensure some level of consensus re. expectations. Also have a post-exam meeting to discuss outcomes and discrepancies.
  • Include external examiners in the assessment process. These external examiners should be practicing clinicians.

When running a workshop, getting input from external (perceived to be objective) people can give what you’re trying to do an air of credibility that may be missing, especially if you’re presenting to peers / colleagues.

2 principles:
Don’t aim for objectivity, aim for consensus
Multiple sources of input can improve the quality of the assessment

2 practical things:
Get input from internal and external sources when developing assessment tasks
Provide a standard source for procedures / skills so that all students can work from the same perspective

Article on work based assessment from BMJ

Categories
twitter feed

Twitter Weekly Updates for 2012-04-02

Categories
diigo

Posted to Diigo 01/16/2012

    • the CoI theoretical framework is essentially incompatible with traditional distance education approaches that value independence and autonomy over collaborative discourse in purposeful communities of inquiry (Garrison, 2009)
    • the explanatory value of a CoI approach depends on the educational purpose and context
    • it is very difficult to achieve deep understanding without discourse
    • While this may be accomplished through Socratic dialogue or in a one-to-one tutorial with a qualified instructor, it is totally impractical in most educational contexts (especially scalable distance education)
    • Discounting SP is to discount the importance of critical discourse in a connected, knowledge based society
    • It is also difficult to see how one gains metacognitive awareness and ability without sustained discourse and feedback (Akyol & Garrison, 2011). This may well be one of the great weaknesses of independent study and didactic approaches.
    • The CoI is a generic theoretical framework that must be viewed as a means to study collaborative constructivist educational transactions – be they in online, blended or face-to-face environments
    • The validation of this framework would also suggest that it can also be used as a rubric to test for functioning communities of inquiry
    • I think one of the main problems with CoI research is the tendency to consider every online/blended learning environment is a true community of inquiry design when, in fact, there is little teaching, cognitive or social presence (students are reliant on independent activities and tests)
    • the categories of SP are open to refinement but are not necessarily compatible with independent (or informal) learning activities and should not be critiqued from this perspective
    • revised definition of SP “as the ability of participants to identify with the group or course of study, communicate purposefully in a trusting environment, and develop personal and affective relationships progressively by way of projecting their individual personalities” (Garrison, 2011, p.34)

Categories
assessment learning physiotherapy students teaching workshop

Teaching and learning workshop at Mont Fleur

Photo taken while on a short walk during the retreat.

A few weeks ago I spent 3 days at Mont Fleur near Stellenbosch, on a teaching and learning retreat. Next year we’re going to be restructuring 2 of our modules as part of a curriculum review, and I’ll be studying the process as part of my PhD. That part of the project will also form a case study for an NRF-funded, inter-institutional study on the use of emerging technologies in South African higher education.

I used the workshop as an opportunity to develop some of the ideas for how the module will change (more on that in another post), and these are the notes I took during the workshop. Most of what I was writing was specific to the module I was working with, so these notes are the more generic ones that might be useful for others.

————————

Content determines what we teach, but not how we teach. But it should be the outcomes that determine the content?

“Planning” for learning

Teaching is intended to make learning possible / there is an intended relationship between teaching and learning

Learning = a recombination of old and new material in order to create personal meaning. Students bring their own experience from the world that we can use to create a scaffold upon which to add new knowledge

We teach what we usually believe is important for them to know

What (and how) we teach is often constrained by external factors:

  • Amount of content
  • Time in which to cover the content (this is not the same as “creating personal meaning”)

We think of content as a series of discrete chunks of an unspecified whole, without much thought given to the relative importance of each topic as it relates to other topics, or about the nature of the relationships between topics

How do we make choices between what to include and exclude?

  • Focus on knowledge structuring
  • What are the key concepts that are at the heart of the module?
  • What are the relationships between the concepts?
  • This marks a shift from dis-embedded facts to inter-related concepts
  • This is how we organise knowledge in the discipline

Task: map the knowledge structure of your module

“Organising knowledge” in the classroom is problematic because knowledge isn’t organised in our brains in the same way that we organise it for students / on a piece of paper. We assign content to discrete categories to make it easier for students to understand / add it to their pre-existing scaffolds, but that’s not how it exists in minds.

Scientific method (our students do a basic physics course in which this method is emphasised, yet they don’t transfer this knowledge to patient assessment):

  1. Observe something
  2. Construct an hypothesis
  3. Test the hypothesis
  4. Is the outcome new knowledge / expected?

Task: create a teaching activity (try to do something different) that is aligned with a major concept in the module, and also includes graduate attributes and learning outcomes. Can I do the poetry concept? What about gaming? Learners are in control of the environment, mastering the task is a symbol of valued status within the group, a game is a demarcated learning activity with set tasks that the learner has to master in order to proceed, feedback is built in, games can be time and resource constrained

The activity should include the following points:

  • Align assessment with outcomes and teaching and learning activities (SOLO taxonomy – Structure of Observed Learning Outcomes)
  • Select a range of assessment tools
  • Justify the choice of these tools
  • Explain and defend marks and weightings
  • Meet the criteria for reliability and validity
  • Create appropriate rubrics

Assessment must be aligned with learning outcomes and modular content. It provides students with opportunities to show that they can do what is expected of them. Assessment currently highlights what students don’t know, rather than emphasising what they can do, and looking for ways to build on that strength to fill in the gaps.

Learning is about what the student does, not what the teacher does.

How do you create observable outcomes?

The activity / doing of the activity is important

As a teacher:

  • What type of feedback do you give?
  • When do you give it?
  • What happens to it?
  • Does it lead to improved learning?

Graduate attributes ↔ Learning outcomes ↔ Assessment criteria ↔ T&L activities ↔ Assessment tasks ↔ Assessment strategy

Assessment defines what students regard as important, how they spend their time and how they come to see themselves as individuals (Brown, 2001; in Irons, 2008: 11)

Self-assessment is potentially useful, although it should be low-stakes

Use a range of well-designed assessment tasks to address all of the Intended Learning Outcomes (ILOs) for your module. This will help to provide evidence to teachers of the students’ competence / understanding

In general quantitative assessment uses marks while qualitative assessment uses rubrics

Checklist for a rubric:

  • Do the categories reflect the major learning objectives?
  • Are there distinct levels which are assigned names and mark values?
  • Are the descriptions clear? Are they on a continuum and allow for student growth?
  • Is the language clear and easy for students to understand?
  • Is it easy for the teacher to use?
  • Can the rubric be used to evaluate the work? Can it be used for assessing needs? Can students easily identify growth areas needed?

Evaluation:

  • What were you evaluating and why?
  • When was the evaluation conducted?
  • What was positive / negative about the evaluation?
  • What changes did you make as a result of the feedback you received?

Evaluation is an objective process in which data is collected, collated and analysed to produce information or judgements on which decisions for practice change can be based

Course evaluation can be:

  • Teacher focused – for improvement of teaching practice
  • Learner focused – determine whether the course outcomes were achieved

Evaluation can be conducted at any time, depending on the purpose:

  • At the beginning to establish prior knowledge (diagnostic)
  • In the middle to check understanding (formative) e.g. think-pair-share, clickers, minute paper, blogs, reflective writing
  • At the end to determine the effectiveness of the course / to determine whether outcomes have been achieved (summative) e.g. questionnaires, interviews, debriefing sessions, tests

Obtaining information:

  • Feedback from students
  • Peer review of teaching
  • Self-evaluation

References

  • Knight (n.d.). A briefing on key concepts: Formative and summative, criterion and norm-referenced assessment
  • Morgan (2008). The Course Improvement Flowchart: A description of a tool and process for the evaluation of university teaching
Categories
diigo gaming learning research social media teaching technology

Posted to Diigo 08/17/2011

I did a lot of reading and highlighting the other night, which is why this is so long. I’ve been bookmarking a lot of articles (about 400 at the last count) over the past 6 months or so, and will be trying to get through them over the next few months. There might be more long posts like this one (aggregations of Diigo highlights) as a consequence.

    • I truly believe that a combination of actively influencing a story line in combination with a reaction upon the decisions taken would make learners feel more appreciated or valued, if you will, and encourage them to continue learning with that program instead of only getting negative feedback in the form of a summary assessment when a chapter or course is finished
    • According to Rita Kop PLE is a UK term and PLN an American term. Dave Cormier questions whether the term personal should be used at all. Stephen Downes points out that personal is an OK term if you think about [Personal Learning] Network as opposed to [Personal] Learning Network – and similarly for PLE
    • the words are not as important as the process
    • a Personal Learning Environment (PLE) is more concerned with tools and technology and that Personal Learning Networks (PLN) are more concerned with connections to people
    • The PLE takes me to my PLN through various gates and paths
    • they’re the ticket and ride, not the destination
    • The PLN is then more akin to a community, but with much looser connections, described in the literature as “weak ties”
    • possible roles involved in networked learning that the teacher may be classified as (Expert: someone with a sustained contribution to a field; Teacher: an expert with authority; Curator: interprets, organizes, and presents content; Facilitator: guides, directs, leads, and assists learners, without necessarily being a subject matter expert)
    • why focus on PLEs? Shouldn’t we be trying to figure out how to make PLN work better?
    • Development of your PLE is about working with technology, refining your use of tools to give you more keys or more efficient access to your network of people and resources
    • “Pundits may be asking if the Internet is bad for our children’s mental development, but the better question is whether the form of learning and knowledge-making we are instilling in our children is useful to their future.”
    • we can’t keep preparing students for a world that doesn’t exist
    • The contemporary American classroom, with its grades and deference to the clock, is an inheritance from the late 19th century. During that period of titanic change, machines suddenly needed to run on time. Individual workers needed to willingly perform discrete operations as opposed to whole jobs. The industrial-era classroom, as a training ground for future factory workers, was retooled to teach tasks, obedience, hierarchy and schedules.
    • Teachers and professors regularly ask students to write papers. Semester after semester, year after year, “papers” are styled as the highest form of writing.
      • And yet they will probably never have to communicate anything in that format ever again…unless they also become academics
    • question the whole form of the research paper
    • “What if bad writing is a product of the form of writing required in school — the term paper — and not necessarily intrinsic to a student’s natural writing style or thought process?”
    • A classroom suited to today’s students should de-emphasize solitary piecework
    • That classroom needs new ways of measuring progress, tailored to digital times — rather than to the industrial age or to some artsy utopia where everyone gets an Awesome for effort.
    • Blended learning lets designers split off prerequisite material from the rest of a course
    • Blended learning lets instructional designers separate rote content focusing on lower-order thinking skills, which can be easily taught online, from critical thinking skills, which many instructors feel more comfortable addressing in the classroom
    • Learners can have more meaningful conversations about these topics because they have developed a familiarity with basic management policies and procedures and have had time to integrate what they know into their thinking
    • We cannot have it both ways: quality of thinking and speed are anathema to each other.
    • Covering content is daunting enough, but providing the time necessary to indulge in the quality conversations that make learning truly engaging is almost impossible
    • the challenge of articulating thoughts quickly
    • post two dynamic questions online each night. These questions have many possible answers, require analysis of content and the creation of unique ideas
    • when we revisit these discussions in the classroom, students have a plethora of ideas to share. They are no longer scared to speak out because they have a confidence born from their online discussions and the validation of their peers
    • weave those online conversations back into the classroom
      • “Some students have great ideas, but they experience difficulty expressing those ideas clearly.”
    • Good practice in undergraduate education:
    • We address the teacher’s how, not the subject-matter what, of good practice in undergraduate education. We recognize that content and pedagogy interact in complex ways.
    • An undergraduate education should prepare students to understand and deal intelligently with modern life.
    • 1. Encourages Contact Between Students and Faculty. Frequent student-faculty contact in and out of classes is the most important factor in student motivation and involvement. Faculty concern helps students get through rough times and keep on working. Knowing a few faculty members well enhances students’ intellectual commitment and encourages them to think about their own values and future plans.
    • 2. Develops Reciprocity and Cooperation Among Students. Learning is enhanced when it is more like a team effort than a solo race. Good learning, like good work, is collaborative and social, not competitive and isolated. Working with others often increases involvement in learning. Sharing one’s own ideas and responding to others’ reactions sharpens thinking and deepens understanding.
    • 3. Encourages Active Learning. Learning is not a spectator sport. Students do not learn much just by sitting in classes listening to teachers, memorizing pre-packaged assignments, and spitting out answers. They must talk about what they are learning, write about it, relate it to past experiences and apply it to their daily lives. They must make what they learn part of themselves.
    • 4. Gives Prompt Feedback. Knowing what you know and don’t know focuses learning. Students need appropriate feedback on performance to benefit from courses. When getting started, students need help in assessing existing knowledge and competence. In classes, students need frequent opportunities to perform and receive suggestions for improvement. At various points during college, and at the end, students need chances to reflect on what they have learned, what they still need to know, and how to assess themselves.
    • 5. Emphasizes Time on Task. Time plus energy equals learning. There is no substitute for time on task. Learning to use one’s time well is critical for students and professionals alike. Students need help in learning effective time management. Allocating realistic amounts of time means effective learning for students and effective teaching for faculty. How an institution defines time expectations for students, faculty, administrators, and other professional staff can establish the basis of high performance for all.
    • 6. Communicates High Expectations. Expect more and you will get more. High expectations are important for everyone — for the poorly prepared, for those unwilling to exert themselves, and for the bright and well motivated. Expecting students to perform well becomes a self-fulfilling prophecy when teachers and institutions hold high expectations for themselves and make extra efforts.
    • 7. Respects Diverse Talents and Ways of Learning. There are many roads to learning. People bring different talents and styles of learning to college. Brilliant students in the seminar room may be all thumbs in the lab or art studio. Students rich in hands-on experience may not do so well with theory. Students need the opportunity to show their talents and learn in ways that work for them. Then they can be pushed to learn in new ways that do not come so easily.
    • tell real stories from your own life in a way that is relevant and engaging to your audience. If more people could just remember that great speeches or presentations leverage the power of the speaker’s own stories
    • we must not talk ourselves out of being who we really are
    • People do not care about your excuses, they care only about seeing your authentic self
    • People crave authenticity just about more than anything else, and one way to be your authentic self and connect with an audience is by using examples and stories from your own life that illuminate your message in an engaging, memorable way