The device is an ultrasound image-guided robot that draws blood from veins. The results were comparable to, or exceeded, clinical standards, with an overall success rate of 87% for the 31 participants whose blood was drawn; for the 25 people whose veins were easy to access, the success rate was 97%. A fully integrated device, which includes a module that handles samples and a centrifuge-based blood analyzer, could be used at bedsides and in ambulances, emergency rooms, clinics, doctors’ offices and hospitals.
This is another example of the kinds of tasks that will increasingly be performed by machines. You can argue that certain patient populations (e.g. young children, patients with mental health issues, etc.) will always need a human being performing the technique for safety reasons. And this is likely to be true for a long time. But those situations account for only a minority of the venipunctures performed; the bulk of this work will soon be done by robots that are cheaper and faster than human clinical staff, and that cause less damage.
Nurses are unlikely to be replaced any time soon because their work includes so much more than drawing blood. But the tasks we expect them to perform are certainly going to change. How are health professions educators in the undergraduate curriculum working to get ahead of those changes?
The problem of overdiagnosis is often mentioned in relation to two common cancers: breast and prostate. In both cases, enhanced technology is already detecting small abnormalities that may never result in harm during a lifetime. Machine-learning may trump human interpretation but merely making a diagnosis does not bring us closer to the truth about the impact of the finding. In other words, will the cancer ever cause symptoms, and crucially, will the patient die from it? How will the knowledge of cancer alter the rest of a person’s days?
I’m not a fan of the way the author starts the article; it feels a bit contrived and unlikely to reflect the patient experience of healthcare around the world. But I think that the point the author is making is that there are certain aspects of healthcare that AI and robots aren’t going to replace (she could probably have just said that?).
So yes, AI is already “better” than human beings in several different areas (e.g. diagnostics, interpretation of findings, image recognition, etc.). But no, that doesn’t mean that healthcare professionals will be replaced. Because being a doctor/physio/nurse means that we are more than interpreters of results; we are human beings in communion with other human beings. While the features of AI in clinical practice don’t mean that we’re going to see the replacement of professions, they do mean that we might see the replacement of tasks within professions.
Unfortunately, the article doesn’t get to this point and simply concludes that, because all the tasks of a doctor can’t be replaced, the question is moot. But it’s the wrong question to ask. We’re not going to replace health care providers with smart humanoid robots but we’ll definitely see changes in professional training and in clinical practice.
The implications of this are that, in order to remain relevant, professions in the near future will need to demonstrate an ability to take advantage of the benefits of advanced technologies while adapting and expanding the relationship-centred aspects of health care.
I’ve started working on what will eventually become a curated library of resources that I’m using for my research on the impact of artificial intelligence and machine learning on clinical practice. At the moment it’s just a public repository of the articles, podcasts and blog posts that I’ve read or listened to and then saved in Zotero. You can subscribe to the feed so that when new items are added you’ll get a notification in whatever feed reader you use. Click on the image below to see the library.
For now, it’s a public but closed group: anyone can see the list of library items, but no-one can join the group, which means no-one else can add, edit or delete resources. This is just because I’m still figuring out how it works and don’t want the additional admin of actually managing anything. I may open it up in future if anyone else is interested in joining and contributing. I’m also not sharing the original articles and books, but will look into the implications of sharing these publicly, considering that most of them – being academic articles – are subject to copyright restrictions from the publishers.
The library/repository isn’t meant to be exhaustive, but rather a small selection of articles and other resources that I think might be useful for clinicians, educators, students and researchers with an interest in AI in healthcare. At the moment it’s just a dump of some of the resources I’ve used, and includes the notes and links associated with them. I’m going to revisit the items in the list and try to add more useful summaries and descriptions, with the idea that this could become something like a curated, annotated reading/watching/listening list for anyone with an interest in the topic.
“…implementation should be seen as an agile, iterative, and lightweight process of obtaining training data, developing algorithms, and crafting these into tools and workflows.”
Coiera, E. (2019). The Last Mile: Where Artificial Intelligence Meets Reality. Journal of Medical Internet Research, 21(11), e16323. https://doi.org/10.2196/16323
A short article (2 pages of text) describing the challenges of building AI systems without understanding that technological solutions are only relevant when they solve real world problems that we care about, and when they are built within the systems that they will ultimately be used in.
Note: I found it hard not to just rewrite the whole paper because I really like the way Coiera writes and find that his economy with words makes it hard to cut things out i.e. I think that it’s all important text. I tried to address this by making my notes without looking at the original article, and then going back over the notes and rewriting them.
Technology shapes us as we shape it. Humans and machines form a sociotechnical system.
The application of technology should be shaped by the problem at hand and not the technology itself. But we see the opposite of this today, with companies building technologies that are then used to solve “problems” that no-one thought were problems. Most social media fits this description.
Technological innovations may create new classes of solution, but it’s only in the real world that we see which problems are worth addressing and which solutions are most appropriate. Even when a technology is presented as a solution, it’s up to us to decide whether it is the best solution, and whether the problem is important in the first place.
There are two broad research agendas for AI:
The technical aspects of building machine intelligence.
The application of machine intelligence to real world problems that we care about.
In our drive to accelerate progress in the first area, we may lose sight of the second. For example, even though image recognition is developing very quickly, the use of image recognition systems has had little clinical impact to date. In some cases it may even make clinical outcomes worse; for example, when overdiagnosis of a condition causes an increase in management (and the associated costs and exposure to harm), even though treatment options remain unchanged.
There are three stages of development with data-driven technologies like AI-based systems:
Data are acquired, labelled and cleaned.
Algorithms are built and their technical performance tested in controlled environments.
Algorithms are applied in real world contexts.
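The three stages above can be sketched in code. This is just a toy illustration (the data, the threshold “model” and the new reading are all made up), but it shows how each stage builds on the one before it:

```python
# Illustrative sketch of the three stages (hypothetical data and model,
# not from the article): a toy classifier that flags "abnormal" readings.

# Stage 1: data are acquired, labelled and cleaned.
# Raw readings come as (value, label) pairs; None values are "dirty" and dropped.
raw = [(0.2, 0), (0.9, 1), (None, 1), (0.8, 1), (0.1, 0), (0.3, 0)]
clean = [(x, y) for x, y in raw if x is not None]

# Stage 2: build and test technical performance in a controlled setting.
# A trivial "model": pick the threshold that best separates the labels.
def train_threshold(data):
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in data):
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

threshold = train_threshold(clean)
accuracy = sum((x >= threshold) == bool(y) for x, y in clean) / len(clean)

# Stage 3: the algorithm is applied in a real-world context, where new
# readings may not look like the training data at all.
new_reading = 0.75
prediction = new_reading >= threshold
print(threshold, accuracy, prediction)
```

The first two stages are entirely under the developer’s control; it’s only at the third stage that the system meets the messiness of practice.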
It’s only really in the last stage that it becomes clear that “AI does nothing on its own” i.e. all technology is embedded in the sociotechnical systems mentioned earlier and is intricately connected to people and the choices that people make. This makes sociotechnical systems messy and complex, and therefore resistant to the “solutions” touted by technology companies.
Some of the “last mile” challenges of AI implementation include:
Measurement: We use standard metrics of AI performance to show improvement. But these metrics are often only useful in controlled experiments and are divorced from the practical realities of implementation in the clinical context.
Generalisation and calibration: AI systems are trained on historical data and so future performance of the algorithm is dependent on how well the historical data matches the new context.
Local context: The complexity of interacting variables within local contexts means that any system will have to be fine-tuned to the organisation in which it is embedded. Organisations also change over time, meaning that the AI will need to be adjusted as well.
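The generalisation problem in particular is easy to demonstrate. In this hypothetical sketch, a decision threshold derived from “historical” data performs noticeably worse when the local context shifts (all the numbers here are invented for illustration):

```python
import random

# Hypothetical illustration of the generalisation/calibration problem:
# a fixed decision threshold learned from historical data, evaluated on
# a new context whose data distribution has shifted.

random.seed(1)

def sample(mean_neg, mean_pos, n=1000):
    """Generate labelled readings: negatives and positives around two means."""
    data = [(random.gauss(mean_neg, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(mean_pos, 1.0), 1) for _ in range(n)]
    return data

def accuracy(data, threshold):
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

historical = sample(mean_neg=0.0, mean_pos=3.0)   # the training context
shifted = sample(mean_neg=1.5, mean_pos=4.5)      # a new local context

threshold = 1.5  # midpoint learned from the historical data

print(f"historical accuracy: {accuracy(historical, threshold):.2f}")
print(f"shifted accuracy:    {accuracy(shifted, threshold):.2f}")
```

Nothing about the algorithm changed; only the context did, and performance dropped anyway. This is why local fine-tuning and ongoing adjustment are unavoidable.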
The author also provides possible solutions to these challenges.
Software development has moved from a linear process to an iterative model where systems are developed in situ through interaction with users in the real world. Google, Facebook, Amazon, etc. do this all the time by exposing small subsets of users to changes in the platform, and then measuring differences in engagement using metrics that the platforms care about (time spent on Facebook, or number of clicks on ads).
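The pattern is simple to sketch. In this made-up example, a small random subset of users is exposed to a change and an invented engagement metric is compared between groups (the platforms obviously use far more sophisticated statistics, but the structure is the same):

```python
import random

# A minimal sketch of the A/B-testing pattern described above: randomly
# assign a small subset of users to a variant, then compare an engagement
# metric between groups. Users, metric and effect size are all made up.

random.seed(42)

def simulate_engagement(variant):
    """Hypothetical engagement metric (e.g. minutes on site) per user."""
    base = random.gauss(10.0, 2.0)
    return base + (0.5 if variant == "treatment" else 0.0)

groups = {"control": [], "treatment": []}

for user in range(10_000):
    # Expose a small random subset (10%) of users to the change.
    variant = "treatment" if random.random() < 0.10 else "control"
    groups[variant].append(simulate_engagement(variant))

for name, values in groups.items():
    print(f"{name}: n={len(values)}, mean engagement={sum(values)/len(values):.2f}")
```

The key point is that the system is evaluated *in situ*, on real users, rather than signed off once in a controlled environment.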
In healthcare we’ll need to build systems in which AI-based technologies are implemented, not as completed solutions, but with the understanding that they will need refinement and adaptation through iterative use in complex, local contexts. Ideally, they will be built within the systems they are going to be used in.
Note: I’m the Editor at OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.
It’s a very important ritual. If you look at rituals, in general, they are all about crossing a threshold. We marry, we have baptisms, we have funerals—all with ceremony to indicate the crossing of a threshold. If we step back and look at the physical exam, it has all the trappings of ritual.
Almost immediately we get to the notion that there’s very little value in terms of data collection that happens during the physical exam. It’s clear that the validity and reliability of a lot of what we do during the “laying on of hands” is questionable. So far so good. But then the hosts start talking about the value of physical touch as part of a ritual that includes some kind of threshold crossing for the clinician and patient. This is where it starts getting a bit weird.
On the one hand, I agree that there’s a lot of ritual framing the patient-clinician interaction and that this may even be something that patients look for. On the other hand, I don’t think this is something to be celebrated, and I believe it will fall away as AI becomes more tightly integrated into healthcare. You don’t need to conduct a physical exam to signal to the patient that you’re paying attention; you can just pay attention.
I’m also uncomfortable with some of the language used in the episode that’s reminiscent of priests, ceremony and the mystical; I don’t know why, but it makes me think of a profession in decline. There’s a parallel here with religion, which is under pressure worldwide as the spaces in which God has room to move get smaller and smaller. Not that medicine is going to go away entirely, but the parts of it that try to hold onto the remnants of a past that is no longer relevant are going to become increasingly disconnected from 21st century clinical practice.
If you think that the value of the human being in the patient-clinician encounter is that we need people to enact a ritual, then surely you’ve lost the plot. There are many reasons why this perspective is problematic, but two big ones come to mind:
Rituals are used to create a sense of mystery as part of a ceremony related to threshold crossing. While I think that this has value in some parts of society (e.g. becoming an adult, getting married, etc.) I don’t think it has a place in scientific endeavour.
You don’t need to spend 7 years studying medicine, and then another 5 years specialising, in order to simulate some kind of threshold crossing with a patient.
Having said all that, I think the episode is still worth listening to, even if only to hear Topol and Verghese come up with dubious arguments for why it’s so important for the doctor to remain central to the clinical encounter.
We know very little about how physiotherapy clinicians think about the impact of AI-based systems on clinical practice, or how these systems will influence human relationships and professional practice. As a result, we cannot prepare for the changes that are coming to clinical practice and physiotherapy education. The aim of this study is to explore how physiotherapists currently think about the potential impact of artificial intelligence on their own clinical practice.
Earlier this year I registered a project that aims to develop a better understanding of how physiotherapists think about the impact of artificial intelligence in clinical practice. Now I’m ready to move forward with the first phase of the study, which is an online survey of physiotherapy clinicians’ perceptions of AI in professional practice. The second phase will be a series of follow up interviews with survey participants who’d like to discuss the topic in more depth.
I’d like to get as many participants as possible (obviously) so would really appreciate it if you could share the link to the survey with anyone you think might be interested. There are 12 open-ended questions split into 3 sections, with a fourth section for demographic information. Participants don’t need a detailed understanding of artificial intelligence and (I think) I’ve provided enough context to make the questionnaire simple for anyone to complete in about 20 minutes.
I recently received ethics clearance to begin an explorative study looking at how physiotherapists think about the introduction of machine learning into clinical practice. The study will use an international survey and a series of interviews to gather data on clinicians’ perspectives on questions like the following:
What aspects of clinical practice are vulnerable to automation?
How do we think about trust when it comes to AI-based clinical decision support?
What is the role of the clinician in guiding the development of AI in clinical practice?
I’m busy finalising the questionnaire and hope to have the survey up and running in a couple of weeks, with more focused interviews following. If these kinds of questions interest you and you’d like to have a say in answering them, keep an eye out for a call to respond.
Here is the study abstract (contact me if you’d like more detailed information):
Background: Artificial intelligence (AI) is a branch of computer science that aims to embed intelligent behaviour into software in order to achieve certain objectives. Increasingly, AI is being integrated into a variety of healthcare and clinical applications and there is significant research and funding being directed at improving the performance of these systems in clinical practice. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and clinical decision-making.
Aim: In order to ensure that we successfully integrate machine intelligence with the essential human characteristics of empathic, caring and creative clinical practice, we need to first understand how clinicians perceive the introduction of AI into professional practice.
Methods: This study will make use of an explorative design to gather qualitative data via an online survey and a series of interviews with physiotherapy clinicians from around the world. The survey questionnaire will be self-administered and piloted for validity and ambiguity, and the interview guide will be informed by the study aim. The population for both the survey and the interviews will consist of physiotherapy clinicians from around the world. This is an explorative study with a convenience sample; therefore no a priori sample size will be calculated.
Is it acceptable for algorithms today, or an AGI in a decade’s time, to suggest withdrawal of aggressive care and so hasten death? Or alternatively, should it recommend persistence with futile care? The notion of “doing no harm” is stretched further when an AI must choose between patient and societal benefit. We thus need to develop broad principles to govern the design, creation, and use of AI in healthcare. These principles should encompass the three domains of technology, its users, and the way in which both interact in the (socio-technical) health system.
The challenges of real-world implementation alone mean that we probably will see little change to clinical practice from AI in the next 5 years. We should certainly see changes in 10 years, and there is a real prospect of massive change in 20 years. 
This means that students entering health professions education today are likely to begin seeing the impact of AI in clinical practice when they graduate, and are very likely to see significant changes 3-5 years into their practice. Regardless of what progress is made between now and then, the students we’re teaching today will certainly be practising in a clinical environment that is very different from the one we prepared them for.
Coiera offers the following suggestions for how clinical education should probably be adapted:
Include a solid foundation in the statistical and psychological science of clinical reasoning.
Develop models of shared decision-making that include patients’ intelligent agents as partners in the process.
Clinicians will have a greater role to play in patient safety as new risks emerge e.g. automation bias.
Clinicians must be active participants in the development of new models of care that will become possible with AI.
We should also recognise that there is still a lot that is unknown with respect to where, when and how these disruptions will occur. Coiera suggests that the best guesses we can make about predicting the changes that are likely to happen should probably focus on those aspects of practice that are routine because this is where AI research will focus. As educators, we should work with clinicians to identify those areas of clinical practice that are most likely to be disrupted by AI-based technologies and then determine how education needs to change in response.
The prospect of AI is a Rorschach blot upon which many transfer their technological dreams or anxieties.
Finally, it’s also useful to consider that we will see in AI our own hopes and fears and that these biases are likely to inform the way we think about the potential benefits and dangers of AI. For this reason, we should include as diverse a group as possible in the discussion of how this technology should be integrated into practice.
The quote from the article is based on Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each slide. This presentation is based on a paper I submitted to OpenPhysio, called: “Artificial intelligence in clinical practice: Implications for physiotherapy education”.