Category: Assessment
-
AI in Research and Assessment – University of Gibraltar
Recently, I had the opportunity to speak with faculty and PhD students at the University of Gibraltar on the topic of changing our relationship with AI in higher education. Rather than fighting against AI use, we need to embrace it: helping faculty design authentic assessments that evaluate how well students collaborate with AI, and teaching PhD…
-

Reimagining HPE with AI – Council of Deans of Health
When students use AI to bypass meaningful learning, they show that our assignments were already completable without real engagement. Using AI to optimise for grades over understanding makes this strategic behaviour more visible. The real issue isn’t the technology—it’s the misaligned incentives that reward compliance over authentic learning.
-

AI in physiotherapy education – Canadian Physiotherapy Association
This workshop introduces clinical instructors to state-of-the-art generative AI applications in physiotherapy education, with a focus on practical implementation strategies for educational content development. Through interactive demonstrations and guided exercises, participants will explore how AI tools can enhance their teaching practice while maintaining high educational and clinical standards. The workshop addresses three key areas: content…
-
The role of prompt design in AI-enhanced feedback literacy
Research on AI-enabled feedback rarely documents the specific prompts used, focusing instead on models and outcomes. This matters because prompt design reflects and develops feedback literacy. Well-crafted prompts that incorporate assessment criteria and theoretical frameworks represent a pedagogical opportunity. By teaching students to create sophisticated prompts, we’re cultivating the same feedback literacy skills that effective…
-
AI-supported writing is a validity issue, not a morality issue
Moving beyond debates about ethics and style, this post reframes AI writing in academia as a validity issue. When students use AI for writing, the key question becomes whether we can still make valid assessments of their skills and understanding. This practical framework helps educators determine where AI support helps or hinders educational goals.
-
The hard work of establishing value
Is anyone really considering the real change necessary to respond to AI in higher education?
-
Red-teaming university policy
Use AI to analyse your regulatory policies and identify adversarial approaches that could circumvent what the policy is trying to achieve.
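As a minimal sketch of the idea (the post doesn’t prescribe any tool or model, and the prompt wording and function names here are illustrative assumptions), a red-teaming pass could be assembled as a chat-style prompt and sent to whichever chat-completion API your institution has access to:

```python
# Illustrative sketch only: the post does not prescribe a tool or model.
# This assembles a chat-style prompt; the resulting messages could be sent
# to any chat-completion API (OpenAI, Anthropic, a locally hosted model, ...).

RED_TEAM_SYSTEM_PROMPT = (
    "Act as an adversarial reader of the university policy below. "
    "List concrete ways someone could technically comply with the rules "
    "while defeating what the policy is trying to achieve, and suggest "
    "a wording fix for each loophole."
)

def build_red_team_messages(policy_text: str) -> list[dict]:
    """Assemble chat messages for a red-teaming pass over a policy document."""
    return [
        {"role": "system", "content": RED_TEAM_SYSTEM_PROMPT},
        {"role": "user", "content": policy_text},
    ]

messages = build_red_team_messages(
    "Students may not submit AI-generated text as their own work."
)
```

The useful part is the adversarial framing in the system prompt; iterating on the returned loopholes and re-running against the revised policy text is where the value accumulates.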
-
Report: Assessment reform for the age of AI
TEQSA’s report “Assessment reform for the age of artificial intelligence” outlines principles and propositions for reforming higher education assessment practices in response to AI. It emphasises integrating AI ethically and focusing on systemic approaches, learning processes, collaboration, and security. The report aims to guide institutions in adapting to AI while maintaining academic integrity.
-
Focus on designing valid assessments
Assessment validity is a more important concern than cheating in higher education. This post presents a position paper arguing that a focus on valid assessment addresses cheating without moralising, and that anti-cheating measures can sometimes harm validity and inclusion. We should emphasise ensuring that graduates can demonstrate the capabilities our assessments claim to measure, rather…
-
More to AI detection than accuracy
AI text detectors, like OpenAI’s reportedly 99.9%-accurate tool, aren’t the solution to academic cheating. These detectors have limitations, including model-specific detection and manipulable statistical features. We’re not going to find answers by entering an arms race with students, trying to build increasingly accurate AI detectors.
-
Claude, create an interactive website
Claude can create interactive educational websites with minimal prompting. This post demonstrates how Claude generated a physiotherapy website with an SVG image, MCQ, and matching activity. While the content is simple, it showcases the potential for rapid development of personal learning materials using AI.
-
Avoiding the busy-work of AI-generated assessment submissions
AI has probably made traditional assessments obsolete, for teachers and students alike. This presents an opportunity to shift towards a paradigm that evaluates students’ ability to apply knowledge and create impact with AI as a supportive tool, rather than simply memorising information.
-
Using AI is not cheating
It makes no sense to say that ‘using AI is cheating’, unless you know more about the context in which the AI was used. ‘Cheating’ implies that students contravened the rules. So just change the rules.
-
Navigating inequity around access to AI in higher education
There’s a lot of anxiety around the potential for student disadvantage due to unequal access to generative AI in education, a concern not unique to AI but prevalent across many aspects of higher education. Despite these inequalities, there’s a movement towards more democratised AI access, with entities like OpenAI providing free tools. I suggest integrating AI deeply into education…
-
What OpenAI did
https://www.oneusefulthing.org/p/what-openai-did With universal free access, the educational value of AI skyrockets (and that doesn’t count voice and vision, which I will discuss shortly). On the other hand, the Homework Apocalypse will reach its final stages. GPT-4 can do almost all the homework on Earth. And it writes much better than GPT-3.5, with a lot more…
-
How assessment needs to change in response to AI
These are my first tentative thoughts on how assessment will have to change in response to AI. I have a short series of posts that I’m working on regarding this topic and I’d love to hear counter-arguments to help me test this idea. First of all, we have to accept that soon (if not already)…
-
Test what students can do with what they know
In the context of AI that can generate pretty much any content, we need to shift assessment practices away from assessing what students know, and focus on their ability to solve meaningful problems using what they know.
-
Integrating AI in higher education needs a culture shift
Integrating AI into higher education requires a questioning of assumptions, and a re-evaluation of attitudes, behaviours, and beliefs. In short, it requires a change in culture.
-
Don’t track student attendance; design valid assessments
We shouldn’t care that students are sitting in the right seat, at the right time, on the right day. We should only care if they can do the job we say we’re training them for. And that requires validity in our assessments, not attendance in our classrooms.
-
Stop using AI detection services because they don’t work
The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. Weber-Wulff, et al. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity,…