Michael Rowe

Trying to get better at getting better

Earlier today I gave two presentations at the University of Gibraltar: one for faculty on the implications of AI for assessment in higher education, and another for PhD students on AI and its use in the research process.

I want to thank Darren Fa and Leon Leanse for the invitation, and for making sure the visit went smoothly.


Here’s a simple overview of the sessions:

AI and Assessment

  • Assessment paradigm shift required: Traditional “AI-proof” assessment designs are failing, with many AI-generated submissions going undetected while receiving higher grades than human work, necessitating a move from detecting AI use to assessing how well students collaborate with AI.
  • Structural over discursive changes: Rather than relying on unenforceable rules and instructions, assessment should be redesigned with structural changes that build validity into the assessment architecture and mechanics, focusing on authentic, real-world problem-solving tasks.
  • Process documentation over product evaluation: Effective AI-integrated assessment emphasises documenting the collaboration process, requiring students to show their prompts, evaluation methods, and metacognitive reflection rather than just submitting final outputs.
  • AI as learning partner, not threat: Policies should treat AI literacy as a professional competency to develop, encouraging students to document their AI collaboration while building evaluative judgement skills for assessing both AI outputs and processes.
  • Adaptive capacity building: The goal is developing transferable thinking skills and decision-making frameworks for AI collaboration rather than rigid rules, preparing students for unknown future technologies through principle-based approaches.

AI and Research

  • AI enhances research ideation and design: Studies show AI-generated research ideas are judged as more novel than human expert ideas, and AI can assist with research design decisions, methodology selection, and experimental planning through structured prompting and expert simulation.
  • Comprehensive research process support: AI assists across the entire research pipeline—from literature review and paper summarisation to qualitative data analysis, quantitative statistical analysis, and writing/editing—though outputs require careful evaluation and oversight.
  • Reading and literature analysis transformation: AI can summarise papers, extract context-specific information, and help navigate complex academic literature, though this raises concerns about potentially “dumbing down” information and reducing the cognitive effort of engaging with difficult ideas.
  • Collaborative responsibility model: Rather than viewing AI as infallible, researchers should “treat AI like a person”—recognising that AI systems can be biased, confused, and forgetful (like humans), but can still be valuable collaborators when outputs are evaluated with appropriate care.
  • Developing evaluative “taste”: Successful AI integration requires cultivating judgement skills for assessing AI quality and appropriateness, moving beyond rigid rules toward adaptive capacity-building that prepares researchers for unknown future technologies and evolving AI capabilities.
