See this brief post on my reasons for sharing rejections.
Introduction
Identity is central to our understanding of the health professions, and much of professional education revolves around this core value. The introduction of artificially intelligent tools (AI-based systems) into clinical practice has led to resistance in the face of perceived threats to clinician autonomy (Jussupow et al., 2018), which is foundational to identity. If these systems are shown to improve patient outcomes, but are resisted by healthcare professionals, we will need to adapt our curricula to better prepare graduates to work alongside AI-based systems. This study therefore aimed to explore the perceptions of physiotherapy clinicians on how their practice might be influenced by the introduction of AI.
Methods
This study used a cross-sectional design to survey an international sample of physiotherapy clinicians with an online questionnaire consisting of open-ended questions. The questionnaire was piloted among a diverse sample of physiotherapists with an interest in AI, identified through the researcher’s professional network. Participants did not need a detailed understanding of AI-based systems as each question included definitions and clinical use cases as examples of the technology. Responses were received from 59 clinicians in 25 countries, across a wide range of clinical specialities, levels of experience, and professional qualifications. Participant responses were analysed qualitatively.
Results
Almost all participants reported no concerns with the implementation of AI-based systems for administration and similar work. Far fewer participants were comfortable with the use of AI-based systems for professional tasks that were ‘closer’ to the patient. For example, assessment through video analysis, or interview transcription and interpretation, were considered acceptable, given certain caveats and qualifications. Almost no participants felt it would be acceptable for an AI-based system to make a clinical judgement or to be in physical contact with the patient. When describing AI–patient interactions that were closer to the patient, participants appealed to an amorphous variable – categorised as a ‘human factor’ in the analysis – as an explanation for why AI-based systems could not conduct them.
Discussion and Conclusion
Participants in this study had few concerns with AI-based systems completing administrative tasks, with the level of concern rising as the system moved closer to the patient interface. In other words, as AI-based systems were perceived to transgress the boundaries of professional practice and move closer to the patient, resistance to the technology increased. Trust in AI-based systems changed as a function of the distance over which the technology acted on the patient; resistance increased as the perceived threat to professional identity increased. Participants appeared to deal with this threat by denying its existence. The collective response could be summarised as, “AI-based systems cannot get too close to the patient because they are not human, and the therapeutic relationship is premised on a human factor that machines will never have”. This line of reasoning is not a useful strategy for dealing with AI-based systems, as it is premised on a range of tenuous assumptions that may be inaccurate (Rowe et al., 2022). While it is not yet clear that AI represents a threat to professional identity, it seems reasonable to pay attention to it nonetheless. Rather than appeal to the notion that machines are not human and therefore have no part to play in patient interactions, we should consider a response that does not rely on certain tasks remaining forever closed to machines. What this response should be is something that curriculum developers will need to consider. If AI-based systems keep increasing in competence, taking on more of the tasks regarded as central to professional practice, we will find ourselves having to ask difficult questions about what it really means to be a healthcare professional. The introduction of AI-based systems into clinical practice is nothing less than an exploration of professional identity that forces us to ask who we are and what we value.
References
- Jussupow, E., Heinzl, A., & Spohrer, K. (2018). I am; We are—Conceptualizing Professional Identity Threats from Information Technology. Thirty-Ninth International Conference on Information Systems (ICIS).
- Rowe, M., Nicholls, D. A., & Shaw, J. (2022). How to replace a physiotherapist: Artificial intelligence and the redistribution of expertise. Physiotherapy Theory and Practice, 38(13), 2275–2283. DOI: https://doi.org/10.1080/09593985.2021.19349
Comments
One response to “Rejected AMEE abstract (oral presentation) | Is ‘being human’ enough? Preparing for clinical practice in the age of artificial intelligence”