Category: Assessment
-
AI-supported writing is a validity issue, not a morality issue
Moving beyond debates about ethics and style, this post reframes AI writing in academia as a validity issue. When students use AI for writing, the key question becomes whether we can still make valid assessments of their skills and understanding. This practical framework helps educators determine where AI support helps or hinders educational goals.
-
Get rid of essays
Mark Carrigan (2024). Are We Deluding Ourselves About GAI-proof Assessment? We need to get rid of essays to the greatest extent possible, scaffolding them in new ways when we can’t abandon them without distorting the pedagogical intention of a unit, ideally to be replaced by forms of assessment which are processual in their scope and/or…
-
The hard work of establishing value
Is anyone really considering the real change necessary to respond to AI in higher education?
-
Weekly digest 44
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.
-
Podcast: Assessment and Swiss cheese (with Phillip Dawson)
https://aipodcast.education/assessment-and-swiss-cheese-phill-dawson-episode-9-of-series-9 This week’s guest is Professor Phillip Dawson, who is Co-Director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia. In addition to Phill’s website, we recommend following Phill on LinkedIn or Twitter, where he shares a lot of his work on the future of assessment…You can find Phill’s research papers…
-
Red-teaming university policy
Use AI to analyse your regulatory policies to identify how adversarial approaches could circumvent what the policy is trying to achieve.
-
Report: Assessment reform for the age of AI
TEQSA’s report “Assessment reform for the age of artificial intelligence” outlines principles and propositions for reforming higher education assessment practices in response to AI. It emphasizes integrating AI ethically, focusing on systemic approaches, learning processes, collaboration, and security. The report aims to guide institutions in adapting to AI while maintaining academic integrity.
-
Focus on designing valid assessments
Assessment validity is more important than cheating in higher education. This post presents a position paper arguing that focusing on valid assessments addresses cheating without moralising. It suggests that anti-cheating measures can sometimes harm validity and inclusion. We should emphasize the importance of ensuring graduates can demonstrate the capabilities our assessments claim to measure, rather…
-
More to AI detection than accuracy
AI text detectors, like OpenAI’s 99.9% accurate tool, aren’t the solution to academic cheating. These detectors have limitations, including model-specific detection and manipulable statistical features. We’re not going to find answers by entering into an arms race with students, by trying to build increasingly accurate AI detectors.
-
Claude, create an interactive website
Claude can create interactive educational websites with minimal prompting. This post demonstrates how Claude generated a physiotherapy website with an SVG image, MCQ, and matching activity. While the content is simple, it showcases the potential for rapid development of personal learning materials using AI.
-
Avoiding the busy-work of AI-generated assessment submissions
AI has probably made traditional assessments obsolete (for teachers and students alike). This presents an opportunity to shift towards a paradigm that evaluates students’ ability to apply knowledge and create impact, using AI as a supportive tool, rather than simply memorising information.
-
Using AI is not cheating
It makes no sense to say that ‘using AI is cheating’, unless you know more about the context in which the AI was used. ‘Cheating’ implies that students contravened the rules. So just change the rules.
-
Students completing obsolete assignments
“Students will want to understand why they are doing assignments that seem obsolete thanks to AI. They will want to use AI as a learning companion, a co-author, or a teammate. They will want to accomplish more than they did before, and also want answers about what AI means for their future learning paths. Schools…
-
Navigating inequity around access to AI in higher education
There’s a lot of anxiety about students being disadvantaged by unequal access to generative AI in education, a concern that isn’t unique to AI but applies across many aspects of higher education. Despite these inequalities, access to AI is becoming more democratised, with entities like OpenAI providing free tools. I suggest integrating AI deeply into education…
-
What OpenAI did
https://www.oneusefulthing.org/p/what-openai-did With universal free access, the educational value of AI skyrockets (and that doesn’t count voice and vision, which I will discuss shortly). On the other hand, the Homework Apocalypse will reach its final stages. GPT-4 can do almost all the homework on Earth. And it writes much better than GPT-3.5, with a lot more…
-
How assessment needs to change in response to AI
These are my first tentative thoughts on how assessment will have to change in response to AI. I have a short series of posts that I’m working on regarding this topic and I’d love to hear counter-arguments to help me test this idea. First of all, we have to accept that soon (if not already)…
-
Test what students can do with what they know
In the context of AI that can generate pretty much any content, we need to shift assessment practices away from assessing what students know, and focus on their ability to solve meaningful problems using what they know.
-
Integrating AI in higher education needs a culture shift
Integrating AI into higher education requires a questioning of assumptions, and a re-evaluation of attitudes, behaviours, and beliefs. In short, it requires a change in culture.
-
Don’t track student attendance; design valid assessments
We shouldn’t care that students are sitting in the right seat, at the right time, on the right day. We should only care if they can do the job we say we’re training them for. And that requires validity in our assessments, not attendance in our classrooms.