Michael Rowe

Trying to get better at getting better

Checco, A., Bracciale, L., Loreti, P., Pinfield, S., & Bianchi, G. (2021, May 17). Can AI be used ethically to assist peer review? Impact of Social Sciences.

…an AI tool which screens papers prior to peer review could be used to advise authors to rework their paper before it is sent on for peer review. This might be of particular benefit to authors for whom English is not a first language, for example, and whose work, therefore, may be likely to be adversely affected by first impression bias.

I think it would be very useful for editors and conference organising committees to have a system that does an initial pre-screening of submissions, catching those with superficial problems and redirecting them back to the authors with a note on which areas could be improved. This would likely attenuate the first-impression bias of reviewers, who are prone to reject submissions based on superficial proxy indicators of quality, such as poor grammar and formatting.



Werdmüller, B. (2021, February 2). Generations. Ben Werdmüller.

Imagine if the majority of content was made like this. You set a few key words and the topic, and then a machine learning algorithm finishes off the work, based on what it’s learned from the corpus of data underpinning its decisions, which happens to be the sum total of output on the web. When most content is actually made by machine learning, the robot is learning from other robots; rinse and repeat until, statistically speaking, almost all content derives from robot output, a photocopy of a photocopy of a photocopy of human thought and emotion. Would it be gibberish? I’d like to think so. I’d like to assume that it would lose all sense of meaning and the original topics would fade out, as photocopies of photocopies do as the series goes on. But what if it’s not? What if, as the human fades out, the content makes more sense, and new, more logical structures emerge from the biological static?

A short post from Ben with provocative questions, derived by looking at where we are now and extending the trend logically into the future. Thought-provoking.


Call for chapters: Learning Design Voices (2021). Centre for Innovation in Learning and Teaching, University of Cape Town.

We’re looking for learning designers, academic developers, instructional designers, curriculum designers, learning experience designers, learning experience engineers…  We don’t mind what you call yourself but if you create learning opportunities for students and staff in post-secondary institutions we want to hear from you! We’re keen to create a space for voices on learning design from a wide range of contexts. We invite you to share your practices and experiences, and to connect with a community of people across the globe who also do this work.  We’re hoping that together we can create the kind of book that you reach for when you need a new idea or want to be inspired by the innovative and responsive work of colleagues in challenging and exciting environments.

The set of provocations to stimulate thinking around the book looks interesting.

  • Provocation 1: Learning Design as field, praxis and identity
  • Provocation 2: Humanising Learning Design
  • Provocation 3: Learning activities, processes and materials
  • Provocation 4: Assessment and evaluation online
  • Provocation 5: Policy and regulatory environment

The deadline for submissions is the 14th of June 2021. Each submission should include an abstract of about 500 words explaining your non-dominant perspective, and a one-page outline of the chapter structure. I’m thinking of submitting something but have no idea how this will fit alongside other writing projects.


Kahneman, D., Rosenfield, A. M., Gandhi, L., & Blaser, T. (2016, October 1). Noise: How to Overcome the High, Hidden Cost of Inconsistent Decision Making. Harvard Business Review.

Judgments made by different people are even more likely to diverge. Research has confirmed that in many tasks, experts’ decisions are highly variable: valuing stocks, appraising real estate, sentencing criminals, evaluating job performance, auditing financial statements, and more. The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow (my emphasis).

As educators (and disciplinary “experts”) we like to think that our judgements of student performance are objective, as if our decisions are free from noise. I often point out to my students that their grades on clinical placements may be more directly influenced by their assessor’s relationship with their spouse, or by when they last ate something, than by their actual clinical performance.





