Category: AI
-
Clinical AI scribes and the redistribution of narrative power
Clinical AI scribes redistribute narrative control in medical consultations, creating unresolved tensions between equity and manipulation. The same mechanisms that might help marginalised patients push back against dismissive care could enable strategic gaming of medical records. This technology reveals that clinical documentation was never purely objective and has always been shaped by power.
-
Gaming AI meeting scribes: Why organisational memory needs new governance
AI meeting scribes haven’t introduced new manipulation tactics—they’ve systematised existing ones. Meeting dynamics have always been adversarial: controlling agendas, timing interventions, using particular terminology. What’s changed is that these dynamics are now more technical, less visible, more durable, and scalable. The technology didn’t create the problem; it made existing power structures harder to ignore.
-

Moving from ad hoc AI use to systematic integration
AI in fitness to practice (FTP) processes involves multiple stakeholders using tools episodically and without clear frameworks—creating risks and missed opportunities. Organisations face a fundamental choice: systematic integration with explicit frameworks that strengthen core purposes, or reactive prohibition that drives use underground, where learning can’t happen and quality can’t be assured.
-

Context sovereignty – CSP conference
Earlier today I gave the Founder’s Lecture at the Chartered Society of Physiotherapists conference in Newport. I’ve been working on the idea of ‘context sovereignty’ as a way to think differently about our relationship with AI, framing it in positive terms rather than viewing it as a threat to professional identity.
-

From oppression to liberation – PBL2025 conference
Institutional responses to AI—detection software, control policies—reveal that education has always measured proxies for learning rather than learning itself. PBL’s foundational commitments to agency, collaborative knowledge construction, and authentic problems position it to respond differently, enabling students to maintain control over meaning through context sovereignty while developing evaluative judgement about what deserves to exist.
-
Learning to use AI effectively takes time, not technique
People who’ve ‘dabbled’ with ChatGPT or Claude often confidently declare that the outputs are “hollow” or that they “lack substance”. But learning to use AI effectively isn’t about mastering a tool—it’s about developing relational skill. And relationships take time. When has anything worth doing ever been easy? And why should AI be different?
-

Podcast: AI in physiotherapy practice
In this episode of PT Pro Talk, I speak to Mariana Hannah Parks on the impact of AI on physiotherapy practice, from clinical reasoning to how we learn, communicate, and make decisions. We explore how AI can serve as a thought partner, helping therapists reflect on their own practices, identify biases, and explore new perspectives…
-
AI and Fitness to Practice in Nursing
AI tools are already embedded in nursing education, but their use in fitness to practice processes raises profound questions about professional judgement, equity, and authenticity that blanket policies cannot adequately address. Instead of avoiding this messiness, we need to work out how to use these tools in ways that actually serve students, even when that…
-

AI in clinical practice – Lincolnshire AHP conference
Earlier today I gave a presentation on generative AI in healthcare at the Lincolnshire AHP conference, focusing on the practical implications of the technology for clinicians. The presentation covered how generative AI works, its current capabilities in the context of clinical practice, and the challenges healthcare systems face in adoption.
-
AI and judgement: Cultivating taste in an age of capability
Content creation is trivially easy now. Curation—selecting what to keep—is also becoming easier as AI learns your patterns. What remains is taste: evaluative judgement about what should exist in the first place. AI can be descriptive but not evaluative. It can learn your preferences but cannot judge whether they’re worth amplifying. That’s your responsibility.
-
A better game: Choosing what to amplify with AI
I keep seeing posts cataloguing AI’s failures and questioning tech companies’ motives. That’s one way to engage. Here’s another: demonstrate thoughtful use, critique from practice, and amplify what matters to you. The question is what you choose to amplify as a practical alternative to performative critique.
-
Performative compliance and other behavioural issues of language models
Language models exhibit specific behavioural patterns that create friction in daily use, distinct from fundamental issues like bias or hallucination. This short guide catalogues some of the behavioural issues I encounter, including sycophancy, performative compliance, and context drift. I also explain why the behaviour is problematic and suggest practical workarounds for interacting more effectively with…
-

AI and the business of practice – Lincolnshire Practice Management Conference
Rather than viewing AI as either technological salvation or existential threat, practice managers need frameworks for thoughtful integration of this technology into practice contexts. This means starting with administrative tasks, building staff confidence through demonstration, and maintaining clear ethical boundaries. The goal isn’t wholesale transformation but strategic enhancement of existing workflows.
-
[Link] Reflections on the proliferation, use and misuse of (generative) AI
Cheating is a social problem. We should not be trying to use technology to solve a social problem.
-
AI in Research and Assessment – University of Gibraltar
Recently, I had the opportunity to speak with faculty and PhD students at the University of Gibraltar on the topic of changing our relationship with AI in higher education. Rather than fighting against AI use, we need to embrace it—helping faculty design authentic assessments that evaluate how well students collaborate with AI, and teaching PhD…
-
Context Sovereignty in AI and learning – AMEE AI symposium
Current AI chatbots can’t access your persistent knowledge structures, forcing repetitive prompting and limiting meaningful learning. Context sovereignty changes this by letting you maintain control over your personal learning data while using AI to amplify your intent. Rather than asking “what can AI do?” we should ask “what context do I bring to shape AI’s…
-
Making sense of context engineering
GraphRAG and knowledge graphs are to context engineering what RAG and vector databases are to prompt engineering.
-

Physiopedia Plus AI Masterclass and discount code
The Physiopedia Plus AI Masterclass for Healthcare Professionals is a short course (about 9 hours) aimed at anyone working in health and social care. It includes overviews of the state of the art in generative AI, as well as exploring the role of AI in education, research, and clinical practice. It’s only available to Physiopedia…
-
[Essay] Context sovereignty, engineering, and personal learning
Context sovereignty is the idea that learners should control the personal context that informs AI-supported learning. Instead of constantly explaining your background to AI, AI should develop persistent understanding of your knowledge and thinking patterns. And context engineering provides the practical framework to achieve this. In this essay I explore both concepts, explaining why they…
-
Podcast – Countdown to superintelligence
Sam Harris speaks with Daniel Kokotajlo about the potential impacts of superintelligent AI over the next decade. They discuss Daniel’s predictions in his essay “AI 2027,” the alignment problem, what an intelligence explosion might look like, the capacity of LLMs to intentionally deceive, the economic implications of recent advances in AI, AI safety testing, the…