Author: Michael Rowe
-
Clinical AI scribes and the redistribution of narrative power
Clinical AI scribes redistribute narrative control in medical consultations, creating unresolved tensions between equity and manipulation. The same mechanisms that might help marginalised patients push back against dismissive care could enable strategic gaming of medical records. This technology reveals that clinical documentation was never purely objective and has always been shaped by power.
-
Gaming AI meeting scribes: Why organisational memory needs new governance
AI meeting scribes haven’t introduced new manipulation tactics—they’ve systematised existing ones. Meeting dynamics have always been adversarial: controlling agendas, timing interventions, using particular terminology. What’s changed is that these dynamics are now more technical, less visible, more durable, and more scalable. The technology didn’t create the problem; it made existing power structures harder to ignore.
-

Moving from ad hoc AI use to systematic integration
AI in fitness to practice (FTP) processes involves multiple stakeholders using tools episodically and without clear frameworks—creating risks and missed opportunities. Organisations face a fundamental choice: systematic integration with explicit frameworks that strengthen core purposes, or reactive prohibition that drives use underground, where learning can’t happen and quality can’t be assured.
-

Context sovereignty – CSP conference
Earlier today I gave the Founder’s Lecture at the Chartered Society of Physiotherapy conference in Newport. I’ve been working on the idea of ‘context sovereignty’ as a way to think differently about our relationship with AI, framing it in positive terms rather than viewing it as a threat to professional identity.
-

From oppression to liberation – PBL2025 conference
Institutional responses to AI—detection software, control policies—reveal that education has always measured proxies for learning rather than learning itself. PBL’s foundational commitments to agency, collaborative knowledge construction, and authentic problems position it to respond differently, enabling students to maintain control over meaning through context sovereignty while developing evaluative judgement about what deserves to exist.
-
Learning to use AI effectively takes time, not technique
People who’ve ‘dabbled’ with ChatGPT or Claude often confidently declare that the outputs are “hollow” or that they “lack substance”. But learning to use AI effectively isn’t about mastering a tool—it’s about developing relational skill. And relationships take time. When has anything worth doing ever been easy? And why should AI be different?
-

Podcast: AI in physiotherapy practice
In this episode of PT Pro Talk, I speak to Mariana Hannah Parks about the impact of AI on physiotherapy practice, from clinical reasoning to how we learn, communicate, and make decisions. We explore how AI can serve as a thought partner, helping therapists reflect on their own practices, identify biases, and explore new perspectives…
-
AI and Fitness to Practice in Nursing
AI tools are already embedded in nursing education, but their use in fitness to practice processes raises profound questions about professional judgement, equity, and authenticity that blanket policies cannot adequately address. Instead of avoiding this messiness, we need to work out how to use these tools in ways that actually serve students, even when that…
-

AI in clinical practice – Lincolnshire AHP conference
Earlier today I gave a presentation on generative AI in healthcare at the Lincolnshire AHP conference, focusing on the practical implications of the technology for clinicians. The presentation covered how generative AI works, its current capabilities in the context of clinical practice, and the challenges healthcare systems face in adoption.
-
AI and judgement: Cultivating taste in an age of capability
Content creation is trivially easy now. Curation—selecting what to make—is also becoming easier as AI learns your patterns. What remains is taste: evaluative judgement about what should exist in the first place. AI can be descriptive but not evaluative. It can learn your preferences but cannot judge whether they’re worth amplifying. That’s your responsibility.
-
A better game: Choosing what to amplify with AI
I keep seeing posts cataloguing AI’s failures and questioning tech companies’ motives. That’s one way to engage. Here’s another: demonstrate thoughtful use, critique from practice, and amplify what matters to you. Choosing what to amplify is a practical alternative to performative critique.
-
Performative compliance and other behavioural issues of language models
Language models exhibit specific behavioural patterns that create friction in daily use, distinct from fundamental issues like bias or hallucination. This short guide catalogues some of the behavioural issues I encounter, including sycophancy, performative compliance, and context drift. I also explain why these behaviours are problematic and suggest practical workarounds for interacting more effectively with…
-

My book on scholarship as a commons
We face increasingly complex challenges yet have made systematic thinking tools exclusive to academic institutions. This creates artificial scarcity when we need broader intellectual engagement. Scholarship should function as intellectual commons—shared infrastructure enabling thoughtful navigation of uncertainty, complexity, and ambiguity for everyone, not just credentialed experts. This book explores what that might look like.
-

AI and the business of practice – Lincolnshire Practice Management Conference
Rather than viewing AI as either technological salvation or existential threat, practice managers need frameworks for thoughtful integration of this technology into practice contexts. This means starting with administrative tasks, building staff confidence through demonstration, and maintaining clear ethical boundaries. The goal isn’t wholesale transformation but strategic enhancement of existing workflows.
-
[Link] Environmental impact of delivering AI at Google scale
“Google’s software efficiency efforts and clean energy procurement have driven a 33x reduction in energy consumption and a 44x reduction in carbon footprint for the median Gemini Apps text prompt over one year. We identify that the median Gemini Apps text prompt uses less energy than watching nine seconds of television (0.24 Wh) and consumes…
-

The Rock of Gibraltar
Yesterday I managed to spend most of the day exploring in and around the Rock of Gibraltar. To be honest, I was expecting a bit of a walk around a nature reserve but it’s incredible how much more there is to see. I’m heading back to the UK later this morning.
-
[Link] Reflections on the proliferation, use and misuse of (generative) AI
Cheating is a social problem. We should not be trying to use technology to solve a social problem.
-
AI in Research and Assessment – University of Gibraltar
Recently, I had the opportunity to speak with faculty and PhD students at the University of Gibraltar, on the topic of changing our relationship with AI in higher education. Rather than fighting against AI use, we need to embrace it—helping faculty design authentic assessments that evaluate how well students collaborate with AI, and teaching PhD…
-

Camping in Wales
Earlier in the summer break we went camping near Cardigan in Wales. It was our second time in Wales (here’s the first) and we had a brilliant time with lots of exploring along coastal paths and beaches. Here are some photos.
-

Camping in the North York Moors
A few weeks ago we went camping near Scarborough in the North York Moors National Park.