Michael Rowe

Trying to get better at getting better

AI-supported writing is a validity issue, not a morality issue

See Dawson, P., Bearman, M., Dollinger, M., & Boud, D. (2024). Validity matters more than cheating. Assessment & Evaluation in Higher Education, 1–12.


There’s a lot of debate about AI-generated writing in academia. Most of it focuses on whether it’s cheating, whether it’s ethical, or whether the writing is too “sterile.” But I think we’re missing the point. The real issue isn’t about morality or style – it’s about validity.

Validity refers to our ability to make accurate inferences about what we’re trying to measure. In educational assessment, it’s about whether we can confidently say, “Based on this evidence, the student has demonstrated this skill or understanding.”

When a student submits an AI-generated piece of work in place of a personal reflection, the problem isn’t that using AI is “wrong” or that the writing lacks personality. The problem is that we can no longer validly assess what we need to assess – in this case, the student’s personal growth and metacognition.

A validity framework shifts the conversation from moral judgement (“AI writing is cheating”) and issues of style (“AI writing is sterile and impersonal”) to a more practical question: “Does this output provide valid evidence of what we’re trying to measure or achieve?”

  • If a student uses AI to write a reflection, the problem isn’t the AI itself, but that we can no longer validly assess their personal growth and understanding.
  • If a student uses AI to help organise research notes for a literature review, validity might not be compromised – we can still assess their synthesis and analysis in the final paper.
  • When a student uses AI to check their lab report formatting, validity remains intact – we can still assess their experimental methods and scientific thinking.

When we frame AI-supported writing as a validity problem rather than a moral one, the path forward becomes clearer. We can focus on designing assessments that maintain validity regardless of AI use, rather than fighting an unwinnable battle against AI adoption. This shift from moral judgement or stylistic preference to validity assessment gives us a more practical way forward, moving from abstract debates to concrete solutions.

The question isn’t whether AI writing is good or bad, right or wrong. The question is: can we still validly assess what we need to assess?



Comments

2 responses to “AI-supported writing is a validity issue, not a morality issue”

  1. Michael Rowe

    We tend to get caught up in the emotions around morality when it comes to things we don’t like. This helps sidestep that process.

  2. Stephen Bestbier

    Great point and keeps the argument regarding the use of AI in academic writing simple.