We Need Transparency in Algorithms, But Too Much Can Backfire

The students had also been asked what grade they thought they would get, and it turned out that levels of trust in those students whose actual grades hit or exceeded that estimate were unaffected by transparency. But people whose expectations were violated – students who received lower scores than they expected – trusted the algorithm more when they got more of an explanation of how it worked. This was interesting for two reasons: it confirmed a human tendency to apply greater scrutiny to information when expectations are violated. And it showed that the distrust that might accompany negative or disappointing results can be alleviated if people believe that the underlying process is fair.

Source: We Need Transparency in Algorithms, But Too Much Can Backfire

This article uses the example of algorithmic grading of student work to discuss issues of trust and transparency. One finding I thought was a useful takeaway in this context is that full transparency may not be the goal; instead, we should aim for medium transparency, and only in situations where students’ expectations are not met. For example, a student whose grade was lower than expected might need to be told something about how it was calculated. But when students got too much information, it eroded trust in the algorithm completely. When students got the grade they expected, no transparency was needed at all, i.e. they didn’t care how the grade was calculated.

For developers of algorithms, the article also provides a short summary of what explainable AI might look like. For example, without exposing the underlying source code, which in many cases is proprietary and holds commercial value for the company, explainable AI might simply identify the relationships between inputs and outcomes, highlight possible biases, and provide guidance that may help to address potential problems in the algorithm.
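As a toy illustration of that input-to-outcome style of explanation, a black-box algorithm can be probed by nudging one input at a time and watching how the output shifts, without ever reading its source code. Everything below is hypothetical: the grading function, its weights, and the probe sizes are invented purely for the sketch.

```python
def grade(essay_length, citations, spelling_errors):
    """Stand-in for a proprietary grading algorithm (a black box to us)."""
    score = 40 + 0.02 * essay_length + 3 * citations - 2 * spelling_errors
    return max(0, min(100, score))

def sensitivity(fn, baseline, deltas):
    """Report how the output moves when each input is nudged on its own."""
    base_score = fn(**baseline)
    report = {}
    for name, delta in deltas.items():
        probed = dict(baseline)
        probed[name] += delta          # change exactly one input
        report[name] = fn(**probed) - base_score
    return report

baseline = {"essay_length": 1000, "citations": 5, "spelling_errors": 4}
deltas = {"essay_length": 100, "citations": 1, "spelling_errors": 1}
print(sensitivity(grade, baseline, deltas))
```

A report like this can tell a student, for instance, that an extra citation moved their grade more than an extra hundred words did, and can surface biases (an input that should not matter but does), all while the algorithm itself stays proprietary.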

Public posting of marks

My university has a policy where the marks for each assessment task are posted – anonymously – on the departmental notice board. I think it goes back to a time when students were not automatically notified by email and individual notifications of grades would have been too time consuming. Now that our students get their marks as soon as they are captured in the system, I asked myself why we still bother to post the marks publicly.

I can’t think of a single reason why we should. What is the benefit of posting a list of marks where students are ranked against how others performed in the assessment? It has no value – as far as I can tell – for learning. No value for self-esteem (unless you’re performing in a higher percentile). No value for the institution or teacher. So why do we still do it?

I conducted a short poll among my final year ethics students asking them if they wanted me to continue posting their marks in public. See below for their responses.

[Image: poll results]

Moving forward, I will no longer post my students’ marks in public, nor will I publish class averages, unless specifically requested to do so. If I’m going to say that I’m assessing students against a set of criteria rather than against each other, I need to have my practice mirror this. How are students supposed to develop empathy when we constantly remind them that they’re in competition with each other?

Twitter Weekly Updates for 2012-04-16

Using a rubric for a blogging assignment

Earlier this year I gave my 3rd year students an assignment in which they needed to write a reflective blog post based on a clinical experience they’d had. I thought I’d share the rubric I used to grade the assignments, as I’ve come across a few people who have had difficulty trying to assign grades to blog posts. The one below is the best that I could manage, but I’d love to hear if you think there’s anything I could do differently.

Twitter Weekly Updates for 2010-05-31

Posted to Diigo 05/25/2010

    • Turn over grading to the students in the course
    • “It was spectacular, far exceeding my expectations,” she said. “It would take a lot to get me back to a conventional form of grading ever again.”
    • she found that it inspired students to do more work, and more creative work than she sees in courses with traditional grading
    • based on contracts and “crowdsourcing.” First she announced the standards — students had to do all of the work and attend class to earn an A. If they didn’t complete all the assignments, they could get a B or C or worse, based on how many they finished. Students signed a contract to agree to the terms. But students also determined if the assignments (in this case blog posts that were mini-essays on the week’s work) were in fact meeting standards
    • the students each ended up writing about 1,000 words a week, much more than is required for a course to be considered “writing intensive”
    • she said that students took more risks
    • “I think students were going out on a limb more and being creative and not just thinking about ‘What does the teacher want?’ ”
    • While the students are ending up with As, many of them are doing so only because they redid assignments that were judged not sufficient to the task on the first try
    • “No one wanted to get one of those messages” that an assignment needed to be redone. (But when they did receive such notes, the students didn’t complain, as many do about grades they don’t like. They reworked their essays, she said.)
    • the alternative approach to grading in the course didn’t eliminate the teacher’s role, but changed the dynamic from “a single teaching-student interaction to multiple teacher-student/student-student interactions” with students in the roles of both student and teacher
    • “peer pressure is a very influential thing.”
    • “The greatest scam ever pulled off by ‘vendors’ was convincing management that an LMS isn’t just a database. The second biggest? That they really needed one. The third? That it is a ‘Learning’ ‘Management’ System.”
    • “Those organizations (and frankly public learning institutions) that are clinging to their standalone learning management systems as a way in which to serve up formal ILT course schedules and eLearning are absolutely missing the big picture. Sadly, there are too many organizations like this out there.”
    • “The traditional stand-alone learning management system (LMS) is built on an industrial age model. There are two specific problems with this model: first it is monolithic within a learning institution and second it is generic across learning institutions.”
    • there are simpler, cost-effective ways of tracking and reporting usage of content
    • the key point, as mentioned in the earlier Dan Pontefract quote, is that by focusing on an LMS, organisations are missing the big picture
    • adding social functionality into formal courses might go some way to making them more “engaging” to users, but it isn’t addressing the wider “learning” needs of the organisation
    • you simply can’t manage or formalise informal learning; it then just becomes formal, managed learning
    • “Whether you’re in a private or public organization …  start first with a ‘collaboration’ system rather than a ‘learning’ system, and build out from there.”

Twitter Weekly Updates for 2009-12-14

Powered by Twitter Tools

Twitter Weekly Updates for 2009-08-10


Wiki marking rubric

I just finished putting together a grading rubric for a wiki-based assignment that my fourth year students did earlier this year. After I couldn’t find one that suited my needs, I developed my own and thought I’d post it here (see below without formatting) in the meantime. If you have any suggestions to improve on it, please feel free to comment. You can also download the document here (it’s in the OpenDocument format).

Content (40)

The Introduction is clear and informative, giving the reader a good overview of what is to come. The body of the work is comprehensive and generally covers the main topic. The Conclusion sums up the work concisely.

Organization and presentation (20)

Information is clearly arranged and visually appealing so that it is easily viewed. Headings and other formatting options are used effectively to direct the reader. Graphics are well chosen and placed on the page to enhance the message.

Language (10)

A high level of accuracy in spelling, grammar and punctuation is expected. The content is well-written, clear and concise. Appropriate terminology is used to accurately present the information and demonstrate an understanding of the work.

Collaboration (10)

Has contributed to both the assignment and discussion pages, as shown by the history of the wiki. Has made useful suggestions to other group members, as well as the peer review group. Clearly an active contributor to the assignment. Respectful and polite.

References (10)

Reference material is appropriate and relevant, and the in-text citations are correctly formatted (this has nothing to do with the syntax for creating a Reference list). A Reference list is present, although not necessarily correctly formatted.

Peer review of other groups’ work (5)

Useful, interesting and / or encouraging comments and suggestions are made to the other group. There is engagement with peers and the content that clearly serves to assist the other group in their work.

Information literacy (5)

A variety of links have been used to direct the reader to additional information about the topic, as well as primary sources of content. All hyperlinks work and are relevant. Good use of embedded images to highlight important points and enhance the reader’s understanding.

Bonus marks (5)

Sound and / or video are used to enhance the message and provide another element of understanding and interest. Use of the wiki syntax and other tools demonstrates a deeper understanding of how the technology can be used to enhance collaboration and provide greater meaning to the work.
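The weights above sum to 100, with 5 bonus marks on top. As a minimal sketch of how the criteria roll up into a final mark (the student scores are made up, and I’m assuming the bonus is capped so the total cannot exceed 100):

```python
# Maximum marks per rubric criterion, taken from the weights above.
RUBRIC_MAX = {
    "Content": 40,
    "Organization and presentation": 20,
    "Language": 10,
    "Collaboration": 10,
    "References": 10,
    "Peer review of other groups' work": 5,
    "Information literacy": 5,
}
BONUS_MAX = 5  # bonus marks; assumed here to be capped at an overall 100

def total_mark(scores, bonus=0):
    """Sum per-criterion scores, add the (capped) bonus, cap at 100."""
    for criterion, score in scores.items():
        assert 0 <= score <= RUBRIC_MAX[criterion], criterion
    return min(100, sum(scores.values()) + min(bonus, BONUS_MAX))

scores = {
    "Content": 32,
    "Organization and presentation": 16,
    "Language": 8,
    "Collaboration": 9,
    "References": 7,
    "Peer review of other groups' work": 4,
    "Information literacy": 4,
}
print(total_mark(scores, bonus=3))  # 83
```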

Additional comments