Category: AI
-
Clinical AI scribes and the redistribution of narrative power
Clinical AI scribes redistribute narrative control in medical consultations, creating unresolved tensions between equity and manipulation. The same mechanisms that might help marginalised patients push back against dismissive care could enable strategic gaming of medical records. This technology reveals that clinical documentation was never purely objective and has always been shaped by power.
-
Gaming AI meeting scribes: Why organisational memory needs new governance
AI meeting scribes haven’t introduced new manipulation tactics—they’ve systematised existing ones. Meeting dynamics have always been adversarial: controlling agendas, timing interventions, using particular terminology. What’s changed is that these dynamics are now more technical, less visible, more durable, and scalable. The technology didn’t create the problem; it made existing power structures harder to ignore.
-

Moving from ad hoc AI use to systematic integration
AI in fitness to practice (FTP) processes involves multiple stakeholders using tools episodically and without clear frameworks—creating risks and missed opportunities. Organisations face a fundamental choice: systematic integration with explicit frameworks that strengthen core purposes, or reactive prohibition that drives use underground, where learning can’t happen and quality can’t be assured.
-

Context sovereignty – CSP conference
Earlier today I gave the Founder’s Lecture at the Chartered Society of Physiotherapy conference in Newport. I’ve been working on the idea of ‘context sovereignty’ as a way to think differently about our relationship with AI, framing it in positive terms rather than viewing it as a threat to professional identity.
-

From oppression to liberation – PBL2025 conference
Institutional responses to AI—detection software, control policies—reveal that education has always measured proxies for learning rather than learning itself. PBL’s foundational commitments to agency, collaborative knowledge construction, and authentic problems position it to respond differently, enabling students to maintain control over meaning through context sovereignty while developing evaluative judgement about what deserves to exist.
-
Learning to use AI effectively takes time, not technique
People who’ve ‘dabbled’ with ChatGPT or Claude often confidently declare that the outputs are “hollow” or that they “lack substance”. But learning to use AI effectively isn’t about mastering a tool—it’s about developing relational skill. And relationships take time. When has anything worth doing ever been easy? And why should AI be different?
-

Podcast: AI in physiotherapy practice
In this episode of PT Pro Talk, I speak with Mariana Hannah Parks about the impact of AI on physiotherapy practice, from clinical reasoning to how we learn, communicate, and make decisions. We explore how AI can serve as a thought partner, helping therapists reflect on their own practices, identify biases, and explore new perspectives…
-
AI and Fitness to Practice in Nursing
AI tools are already embedded in nursing education, but their use in fitness to practice processes raises profound questions about professional judgement, equity, and authenticity that blanket policies cannot adequately address. Instead of avoiding this messiness, we need to work out how to use these tools in ways that actually serve students, even when that…
-

AI in clinical practice – Lincolnshire AHP conference
Earlier today I gave a presentation on generative AI in healthcare at the Lincolnshire AHP conference, focusing on the practical implications of the technology for clinicians. The presentation covered how generative AI works, its current capabilities in the context of clinical practice, and the challenges healthcare systems face in adoption.
-
AI and judgement: Cultivating taste in an age of capability
Content creation is trivially easy now. Curation—selecting what to make—is also becoming easier as AI learns your patterns. What remains is taste: evaluative judgement about what should exist in the first place. AI can be descriptive but not evaluative. It can learn your preferences but cannot judge whether they’re worth amplifying. That’s your responsibility.
-
A better game: Choosing what to amplify with AI
I keep seeing posts cataloguing AI’s failures and questioning tech companies’ motives. That’s one way to engage. Here’s another: demonstrate thoughtful use, critique from practice, and amplify what matters to you. Choosing what to amplify is a practical alternative to performative critique.
-
Performative compliance and other behavioural issues of language models
Language models exhibit specific behavioural patterns that create friction in daily use, distinct from fundamental issues like bias or hallucination. This short guide catalogues some of the behavioural issues I encounter, including sycophancy, performative compliance, and context drift. I also explain why the behaviour is problematic and suggest practical workarounds for interacting more effectively with…
-

AI and the business of practice – Lincolnshire Practice Management Conference
Rather than viewing AI as either technological salvation or existential threat, practice managers need frameworks for thoughtful integration of this technology into practice contexts. This means starting with administrative tasks, building staff confidence through demonstration, and maintaining clear ethical boundaries. The goal isn’t wholesale transformation but strategic enhancement of existing workflows.
-
[Link] Environmental impact of delivering AI at Google scale
“Google’s software efficiency efforts and clean energy procurement have driven a 33x reduction in energy consumption and a 44x reduction in carbon footprint for the median Gemini Apps text prompt over one year. We identify that the median Gemini Apps text prompt uses less energy than watching nine seconds of television (0.24 Wh) and consumes…
-
[Link] Reflections on the proliferation, use and misuse of (generative) AI
Cheating is a social problem. We should not be trying to use technology to solve a social problem.
-
AI in Research and Assessment – University of Gibraltar
Recently, I had the opportunity to speak with faculty and PhD students at the University of Gibraltar on the topic of changing our relationship with AI in higher education. Rather than fighting against AI use, we need to embrace it—helping faculty design authentic assessments that evaluate how well students collaborate with AI, and teaching PhD…
-
![[Link] Clinical prompting resources](https://www.mrowe.co.za/blog/wp-content/uploads/2025/08/Screenshot-From-2025-08-25-05-53-34.png)
[Link] Clinical prompting resources
Link to a set of resources aimed at helping medical students practice using AI to support learning.
-

Context Sovereignty in AI and learning – AMEE AI symposium
Current AI chatbots can’t access your persistent knowledge structures, forcing repetitive prompting and limiting meaningful learning. Context sovereignty changes this by letting you maintain control over your personal learning data while using AI to amplify your intent. Rather than asking “what can AI do?” we should ask “what context do I bring to shape AI’s…
-
[Link] Changes Coming to Higher Ed
https://hybridhorizons.substack.com/p/changes-coming-to-higher-ed
“The institutions that thrive won’t be the ones that resist everything or buy everything. They’ll be the ones that choose carefully, show their working and keep people at the centre.” Interesting ideas.
-
[Link] Academia: The Questions Are Big! It’s the Curricula That Got Small.
https://timothyburke.substack.com/p/academia-the-questions-are-big-its
“…many of the people trying to sell higher education on AI aren’t trying to sell a revolutionary redesign of education itself, just a way of making its dysfunctions cheaper and more efficient.” Wonderful essay. Well worth reading.