Category: Assessment
-
Podcast: Assessment and Swiss cheese (with Phillip Dawson)
https://aipodcast.education/assessment-and-swiss-cheese-phill-dawson-episode-9-of-series-9 This week’s guest is Professor Phillip Dawson, who is Co-Director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia. In addition to Phill’s website, we recommend following Phill on LinkedIn, or Twitter, where he shares a lot of his work on the future of assessment…You can find Phill’s research papers…
-
Red-teaming university policy
Use AI to analyse your regulatory policies to identify how adversarial approaches could circumvent what the policy is trying to achieve.
-
Report: Assessment reform for the age of AI
TEQSA’s report “Assessment reform for the age of artificial intelligence” outlines principles and propositions for reforming higher education assessment practices in response to AI. It emphasizes integrating AI ethically, focusing on systemic approaches, learning processes, collaboration, and security. The report aims to guide institutions in adapting to AI while maintaining academic integrity.
-
Focus on designing valid assessments
Assessment validity matters more than cheating in higher education. This post presents a position paper arguing that focusing on valid assessments addresses cheating without moralising. It suggests that anti-cheating measures can sometimes harm validity and inclusion. We should emphasise ensuring that graduates can demonstrate the capabilities our assessments claim to measure, rather…
-
More to AI detection than accuracy
AI text detectors, like OpenAI’s 99.9% accurate tool, aren’t the solution to academic cheating. These detectors have limitations, including model-specific detection and manipulable statistical features. We’re not going to find answers by entering into an arms race with students, by trying to build increasingly accurate AI detectors.
-
Claude, create an interactive website
Claude can create interactive educational websites with minimal prompting. This post demonstrates how Claude generated a physiotherapy website with an SVG image, MCQ, and matching activity. While the content is simple, it showcases the potential for rapid development of personal learning materials using AI.
-
Avoiding the busy-work of AI-generated assessment submissions
Traditional assessments are probably obsolete in the age of AI, for both teachers and students. This presents an opportunity to shift towards a paradigm that evaluates students’ ability to apply knowledge and create impact using AI as a supportive tool, rather than simply memorising information.
-
Using AI is not cheating
It makes no sense to say that ‘using AI is cheating’, unless you know more about the context in which the AI was used. ‘Cheating’ implies that students contravened the rules. So just change the rules.
-
Students completing obsolete assignments
“Students will want to understand why they are doing assignments that seem obsolete thanks to AI. They will want to use AI as a learning companion, a co-author, or a teammate. They will want to accomplish more than they did before, and also want answers about what AI means for their future learning paths. Schools…
-
Navigating inequity around access to AI in higher education
There’s a lot of anxiety around the potential for student disadvantage due to unequal access to generative AI in education, a concern not unique to AI but prevalent across many aspects of education. Despite these inequalities, there’s a movement towards more democratised AI access, with entities like OpenAI providing free tools. I suggest integrating AI deeply into education…
-
What OpenAI did
https://www.oneusefulthing.org/p/what-openai-did With universal free access, the educational value of AI skyrockets (and that doesn’t count voice and vision, which I will discuss shortly). On the other hand, the Homework Apocalypse will reach its final stages. GPT-4 can do almost all the homework on Earth. And it writes much better than GPT-3.5, with a lot more…
-
How assessment needs to change in response to AI
These are my first tentative thoughts on how assessment will have to change in response to AI. I have a short series of posts that I’m working on regarding this topic and I’d love to hear counter-arguments to help me test this idea. First of all, we have to accept that soon (if not already)…
-
Test what students can do with what they know
In the context of AI that can generate pretty much any content, we need to shift assessment practices away from assessing what students know, and focus on their ability to solve meaningful problems using what they know.
-
Integrating AI in higher education needs a culture shift
Integrating AI into higher education requires a questioning of assumptions, and a re-evaluation of attitudes, behaviours, and beliefs. In short, it requires a change in culture.
-
Don’t track student attendance; design valid assessments
We shouldn’t care that students are sitting in the right seat, at the right time, on the right day. We should only care if they can do the job we say we’re training them for. And that requires validity in our assessments, not attendance in our classrooms.
-
Stop using AI detection services because they don’t work
The researchers conclude that the available detection tools are neither accurate nor reliable, and are mainly biased towards classifying output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of the tools. Weber-Wulff, et al. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity,…
-
Link: What if Turnitin had pivoted towards AI assessment rather than AI detection?
https://markcarrigan.net/2024/02/07/what-if-turnitin-had-pivoted-towards-ai-assessment-rather-than-ai-detection/ “What AI product did TII try and build? A detector. What if Turnitin had pivoted towards AI assessment rather than AI detection? Or AI analytics? Imagine what else could have possibly be done with the data they have in their systems?“ Interesting question to consider.
-
AI is highlighting the limited value of our assessments
We’re now at the stage where AI can generate decent essays and a different AI system can do a respectable job in marking them. The students and lecturers can then retire to the cafe and get on with discussing the interesting stuff. Martin Weller (2022). 25+ Years of Ed Tech: 2022 – AI Generated Content.…
-
In Beta podcast: Assessment and learning
http://inbetaphysio.com/2023/06/29/31-assessment-and-learning/ In this conversation, Ben and I discuss the assessment process, linking it to broader themes of learning, curriculum design, and student experience. We talk about the centralisation of assessment and explore the tensions between institutional control and the autonomy of teachers. We discuss student satisfaction and the influence of risk aversion in educational…