The Generative Conversations project grew out of the planning phase of a workshop presented at the 2024 IFOMPT conference in Basel. The integration of generative AI into physiotherapy practice, education, and research is not an abstract idea for the future. It’s happening now, and how the profession responds to this reality will shape both technological adoption and professional practice for years to come.
The discussion document we created emerged from extensive consultation with physiotherapy professionals across 16 countries, conducted before, during, and after the 2024 IFOMPT conference. Rather than speculating about future possibilities, the focus remains deliberately on current AI capabilities—what these systems can and cannot do today. The document aims to stimulate discussion and debate within health professions organisations and teams, presenting a neutral perspective grounded in existing AI capabilities and informed by a series of engagements with diverse stakeholders.
We organised the output and discussion into three professional domains: practice, education, and research. For each domain, we explored the opportunities and risks, and suggested strategies for moving forward, including a selection of questions to guide discussion and debate. We also distilled part of the discussion into a set of guidance for different stakeholders.
I’ve provided a short summary of each domain below, but if you’re interested in any of this, consider browsing the full project at Generative Conversations in Physiotherapy.
Clinical practice: Weighing opportunities against risks
In clinical settings, AI presents several interesting possibilities. These systems can analyse patient data to identify patterns that might not be immediately apparent, offer alternative perspectives during clinical reasoning, and assist in developing personalised treatment approaches. The potential for improved patient outcomes deserves serious consideration. Administrative applications offer more immediate benefits. AI can streamline documentation, generate patient education materials, and assist with care coordination—potentially returning significant time to direct patient care.
However, the risks require equal attention. Over-reliance on AI recommendations could erode the clinical intuition and hands-on expertise that characterise skilled practice. There’s legitimate concern about practitioners becoming dependent on algorithmic suggestions rather than developing robust clinical reasoning capabilities. Data privacy presents another significant challenge. Patient information shared with AI platforms raises questions about security and confidentiality that current regulatory frameworks struggle to address adequately. The therapeutic relationship—central to effective physiotherapy—could suffer if technology becomes a barrier between practitioner and patient.
Professional liability represents perhaps the most complex challenge. When AI-assisted decisions contribute to adverse outcomes, existing frameworks cannot clearly assign responsibility. This creates uncertainty that the profession will need to address as adoption increases.
Education: Promise and complexity
AI’s potential impact on physiotherapy education appears substantial. Personalised learning pathways, sophisticated clinical simulations, and intelligent tutoring systems could address longstanding challenges in how we prepare practitioners for complex clinical environments. Students might experience rare clinical presentations safely, receive immediate feedback on clinical reasoning, and develop research capabilities with AI assistance. These possibilities warrant serious exploration.
Yet educators raise legitimate concerns about academic integrity when students have access to tools that can generate sophisticated responses to traditional assessments. The risk of reduced hands-on practice presents another challenge—clinical skills require embodied knowledge that digital systems cannot fully replicate. Perhaps most concerning is the potential for increased educational inequality. Well-resourced institutions may gain access to advanced AI capabilities while others lag behind, potentially creating disparities in educational quality based on technological access rather than pedagogical excellence.
Research: Acceleration and integrity
For researchers, AI offers remarkable capabilities that could significantly accelerate knowledge creation. Literature analysis that currently requires weeks can be completed in hours. Pattern recognition in complex datasets might reveal insights that human analysis could miss. AI can facilitate collaboration across linguistic and geographic boundaries. Early adopters report substantial time savings and enhanced analytical capabilities. These benefits suggest AI could democratise research participation and accelerate scientific progress.
However, challenges to research integrity are equally significant. The “black box” nature of AI decision-making can undermine reproducibility—a cornerstone of scientific validity. If researchers cannot fully explain how AI systems reached particular conclusions, verification and replication become problematic. AI systems can perpetuate biases present in their training data, potentially skewing research findings in ways that reinforce existing inequalities. This presents particular concerns for a profession committed to equitable care across diverse populations.
There’s also the question of skill development among early-career researchers. If novice researchers become dependent on AI tools before developing strong foundational capabilities, the profession might lose analytical skills necessary for advancing knowledge.
Potential strategies for moving forward
Rather than prescriptive solutions, several strategic considerations emerge from the analysis:
- AI literacy appears increasingly important for physiotherapy professionals. This doesn’t require programming skills, but rather the ability to critically evaluate AI outputs, understand system limitations, recognise potential biases, and discern when human judgement should prevail over algorithmic suggestions.
- Ethical frameworks may need development to address patient consent for AI use, data security standards, and transparency requirements. These frameworks would likely require regular updating as technological capabilities evolve.
- Human-AI collaboration models warrant exploration to determine when AI assistance enhances practice versus when human expertise should predominate. This requires understanding both AI capabilities and human cognition.
- Equity considerations deserve attention to ensure AI benefits don’t accrue only to well-resourced settings while leaving others behind. This includes considering how different patient populations might be affected by AI integration.
Looking ahead
The physiotherapy profession faces important choices about how to engage with AI integration. These decisions will influence not only how the profession adapts to technological change, but also how effectively these tools serve the core purpose of helping people move and function better. The goal might be to achieve AI integration that enhances rather than undermines the therapeutic relationships and clinical expertise that define effective practice. This would require ongoing dialogue, critical reflection, and careful attention to both opportunities and risks.
The Generative Conversations project is a starting point for this exploration. How the profession’s relationship with AI evolves will depend on the collective choices made by practitioners, educators, researchers, and patients about how these tools should be developed and used. Success likely depends on balancing technological capabilities with preservation of essential human elements in physiotherapy practice. This balance won’t be achieved automatically—it will require deliberate attention and ongoing adjustment as both technology and professional understanding continue to evolve.
Learn more at the Generative Conversations project.