Research project exploring clinicians’ perspectives on the introduction of ML into clinical practice

I recently received ethics clearance to begin an explorative study looking at how physiotherapists think about the introduction of machine learning into clinical practice. The study will use an international survey and a series of interviews to gather data on clinicians’ perspectives on questions like the following:

  • What aspects of clinical practice are vulnerable to automation?
  • How do we think about trust when it comes to AI-based clinical decision support?
  • What is the role of the clinician in guiding the development of AI in clinical practice?

I’m busy finalising the questionnaire and hope to have the survey up and running in a couple of weeks, with more focused interviews following. If these kinds of questions interest you and you’d like to have a say in answering them, keep an eye out for a call to respond.

Here is the study abstract (contact me if you’d like more detailed information):

Background: Artificial intelligence (AI) is a branch of computer science that aims to embed intelligent behaviour into software in order to achieve certain objectives. Increasingly, AI is being integrated into a variety of healthcare and clinical applications and there is significant research and funding being directed at improving the performance of these systems in clinical practice. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and clinical decision-making.

Aim: In order to ensure that we successfully integrate machine intelligence with the essential human characteristics of empathic, caring and creative clinical practice, we need to first understand how clinicians perceive the introduction of AI into professional practice.

Methods: This study will make use of an explorative design to gather qualitative data via an online survey and a series of interviews with physiotherapy clinicians from around the world. The survey questionnaire will be self-administered and piloted to check for validity and ambiguity, and the interview guide will be informed by the study aim. The population for both the survey and the interviews will consist of physiotherapy clinicians from around the world. As this is an explorative study with a convenience sample, no a priori sample size will be calculated.

Article published – An introduction to machine learning for clinicians

It’s a nice coincidence that my article on machine learning for clinicians has been published at around the same time that my poster on a similar topic was presented at WCPT. I’m quite happy with this paper and think it offers a useful overview of machine learning that is specific to clinical practice and will help clinicians understand what is, at times, a confusing topic. The mainstream media (and, to be honest, many academics) conflate a wide variety of terms when they talk about artificial intelligence, and this paper goes some way towards providing background information for anyone interested in how this will affect clinical work. You can download the preprint here.


Abstract

The technology at the heart of the most innovative progress in health care artificial intelligence (AI) is in a sub-domain called machine learning (ML), which describes the use of software algorithms to identify patterns in very large data sets. ML has driven much of the progress of health care AI over the past five years, demonstrating impressive results in clinical decision support, patient monitoring and coaching, surgical assistance, patient care, and systems management. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyze and interpret the complex interactions between data, patients, and clinical decision-makers. However, as this technology becomes more powerful it also becomes less transparent, and algorithmic decisions are therefore increasingly opaque. This is problematic because computers will increasingly be asked for answers to clinical questions that have no single right answer, are open-ended, subjective, and value-laden. As ML continues to make important contributions in a variety of clinical domains, clinicians will need to have a deeper understanding of the design, implementation, and evaluation of ML to ensure that current health care is not overly influenced by the agenda of technology entrepreneurs and venture capitalists. The aim of this article is to provide a non-technical introduction to the concept of ML in the context of health care, the challenges that arise, and the resulting implications for clinicians.
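
The abstract is deliberately non-technical but, for anyone who wants a concrete sense of what “identifying patterns in very large data sets” actually looks like, here is a minimal sketch using scikit-learn. Everything in it is invented for illustration: the “patients”, the features and the outcome are synthetic, and the example is mine rather than part of the published paper.

```python
# Minimal, illustrative sketch of machine learning as pattern-finding.
# The data and feature names are synthetic and purely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Pretend each row is a patient: [age, resting heart rate, pain score]
X = rng.normal(loc=[55, 75, 4], scale=[15, 10, 2], size=(1000, 3))

# Pretend outcome: 1 = responded to treatment. A weak pattern is built in
# (younger patients with lower pain scores respond more often).
risk = 0.03 * (X[:, 0] - 55) + 0.4 * (X[:, 2] - 4)
y = (risk + rng.normal(scale=1.0, size=1000) < 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "learning" step: the algorithm estimates how the features relate to the outcome
model = LogisticRegression().fit(X_train, y_train)

print("Accuracy on unseen patients:", accuracy_score(y_test, model.predict(X_test)))
print("Learned weight per feature:", model.coef_)
```

The point is not the specific algorithm but the workflow: the model is never given the rule linking age and pain to outcome; it recovers an approximation of that rule from examples, which is what happens (at much greater scale and complexity) in the clinical applications described above.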

WCPT poster: Introduction to machine learning in healthcare

It’s a bit content-heavy and not as graphic-y as I’d like but c’est la vie.

I’m quite proud of what I think is an innovation in poster design: the addition of a tl;dr column before the findings. In other words, if you only have 30 seconds to look at the poster, that’s the bit you want to focus on. Related to this, I’ve also moved the Background, Methods and Conclusion sections to the bottom and made them smaller so as to emphasise the Findings, which are placed first.

My full-size poster on machine learning in healthcare for the 2019 WCPT conference in Geneva.

Reference list (download this list as a Word document)

  1. Yang, C. C., & Veltri, P. (2015). Intelligent healthcare informatics in big data era. Artificial Intelligence in Medicine, 65(2), 75–77. https://doi.org/10.1016/j.artmed.2015.08.002
  2. Qayyum, A., Anwar, S. M., Awais, M., & Majid, M. (2017). Medical image retrieval using deep convolutional neural network. Neurocomputing, 266, 8–20. https://doi.org/10.1016/j.neucom.2017.05.025
  3. Li, Z., Zhang, X., Müller, H., & Zhang, S. (2018). Large-scale retrieval for medical image analytics: A comprehensive review. Medical Image Analysis, 43, 66–84. https://doi.org/10.1016/j.media.2017.09.007
  4. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056
  5. Pratt, H., Coenen, F., Broadbent, D. M., Harding, S. P., & Zheng, Y. (2016). Convolutional Neural Networks for Diabetic Retinopathy. Procedia Computer Science, 90, 200–205. https://doi.org/10.1016/j.procs.2016.07.014
  6. Ramzan, M., Shafique, A., Kashif, M., & Umer, M. (2017). Gait Identification using Neural Network. International Journal of Advanced Computer Science and Applications, 8(9). https://doi.org/10.14569/IJACSA.2017.080909
  7. Kidziński, Ł., Delp, S., & Schwartz, M. (2019). Automatic real-time gait event detection in children using deep neural networks. PLOS ONE, 14(1), e0211466. https://doi.org/10.1371/journal.pone.0211466
  8. Horst, F., Lapuschkin, S., Samek, W., Müller, K.-R., & Schöllhorn, W. I. (2019). Explaining the Unique Nature of Individual Gait Patterns with Deep Learning. Scientific Reports, 9(1), 2391. https://doi.org/10.1038/s41598-019-38748-8
  9. Cai, T., Giannopoulos, A. A., Yu, S., Kelil, T., Ripley, B., Kumamaru, K. K., … Mitsouras, D. (2016). Natural Language Processing Technologies in Radiology Research and Clinical Applications. RadioGraphics, 36(1), 176–191. https://doi.org/10.1148/rg.2016150080
  10. Jackson, R. G., Patel, R., Jayatilleke, N., Kolliakou, A., Ball, M., Gorrell, G., … Stewart, R. (2017). Natural language processing to extract symptoms of severe mental illness from clinical text: The Clinical Record Interactive Search Comprehensive Data Extraction (CRIS-CODE) project. BMJ Open, 7(1), e012012. https://doi.org/10.1136/bmjopen-2016-012012
  11. Kreimeyer, K., Foster, M., Pandey, A., Arya, N., Halford, G., Jones, S. F., … Botsis, T. (2017). Natural language processing systems for capturing and standardizing unstructured clinical information: A systematic review. Journal of Biomedical Informatics, 73, 14–29. https://doi.org/10.1016/j.jbi.2017.07.012
  12. Montenegro, J. L. Z., Da Costa, C. A., & Righi, R. da R. (2019). Survey of Conversational Agents in Health. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2019.03.054
  13. Carrell, D. S., Schoen, R. E., Leffler, D. A., Morris, M., Rose, S., Baer, A., … Mehrotra, A. (2017). Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings. Journal of the American Medical Informatics Association, 24(5), 986–991. https://doi.org/10.1093/jamia/ocx039
  14. Oña, E. D., Cano-de la Cuerda, R., Sánchez-Herrera, P., Balaguer, C., & Jardón, A. (2018). A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb. Journal of Healthcare Engineering, 2018, 1–19. https://doi.org/10.1155/2018/9758939
  15. Krebs, H. I., & Volpe, B. T. (2015). Robotics: A Rehabilitation Modality. Current Physical Medicine and Rehabilitation Reports, 3(4), 243–247. https://doi.org/10.1007/s40141-015-0101-6
  16. Leng, M., Liu, P., Zhang, P., Hu, M., Zhou, H., Li, G., … Chen, L. (2019). Pet robot intervention for people with dementia: A systematic review and meta-analysis of randomized controlled trials. Psychiatry Research, 271, 516–525. https://doi.org/10.1016/j.psychres.2018.12.032
  17. Piatt, J., Nagata, S., Šabanović, S., Cheng, W.-L., Bennett, C., Lee, H. R., & Hakken, D. (2017). Companionship with a robot? Therapists’ perspectives on socially assistive robots as therapeutic interventions in community mental health for older adults. American Journal of Recreation Therapy, 15(4), 29–39. https://doi.org/10.5055/ajrt.2016.0117
  18. Troccaz, J., Dagnino, G., & Yang, G.-Z. (2019). Frontiers of Medical Robotics: From Concept to Systems to Clinical Translation. Annual Review of Biomedical Engineering, 21(1). https://doi.org/10.1146/annurev-bioeng-060418-052502
  19. Riek, L. D. (2017). Healthcare Robotics. ArXiv:1704.03931 [Cs]. Retrieved from http://arxiv.org/abs/1704.03931
  20. Kappassov, Z., Corrales, J.-A., & Perdereau, V. (2015). Tactile sensing in dexterous robot hands — Review. Robotics and Autonomous Systems, 74, 195–220. https://doi.org/10.1016/j.robot.2015.07.015
  21. Choi, C., Schwarting, W., DelPreto, J., & Rus, D. (2018). Learning Object Grasping for Soft Robot Hands. IEEE Robotics and Automation Letters, 3(3), 2370–2377. https://doi.org/10.1109/LRA.2018.2810544
  22. Shortliffe, E., & Sepulveda, M. (2018). Clinical Decision Support in the Era of Artificial Intelligence. Journal of the American Medical Association.
  23. Attema, T., Mancini, E., Spini, G., Abspoel, M., de Gier, J., Fehr, S., … Sloot, P. M. A. (n.d.). A new approach to privacy-preserving clinical decision support systems.
  24. Castaneda, C., Nalley, K., Mannion, C., Bhattacharyya, P., Blake, P., Pecora, A., … Suh, K. S. (2015). Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. Journal of Clinical Bioinformatics, 5(1). https://doi.org/10.1186/s13336-015-0019-3
  25. Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA Internal Medicine, 178(11), 1544. https://doi.org/10.1001/jamainternmed.2018.3763
  26. Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. ArXiv:1804.02969 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1804.02969
  27. Weng, S. F., Reps, J., Kai, J., Garibaldi, J. M., & Qureshi, N. (2017). Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLOS ONE, 12(4), e0174944. https://doi.org/10.1371/journal.pone.0174944
  28. Suresh, H., Hunt, N., Johnson, A., Celi, L. A., Szolovits, P., & Ghassemi, M. (2017). Clinical Intervention Prediction and Understanding using Deep Networks. ArXiv:1705.08498 [Cs]. Retrieved from http://arxiv.org/abs/1705.08498
  29. Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689
  30. Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA, 319(1), 19. https://doi.org/10.1001/jama.2017.19198

Health professionals’ role in the banning of lethal autonomous weapons

This is a great episode from the Future of Life Institute on the topic of banning lethal autonomous weapons. You may wonder, what on earth do lethal autonomous weapons have to do with health professionals? I wondered the same thing until I was reminded of the role that physios play in the rehabilitation of landmine victims. Landmines are less sophisticated than the next generation of lethal autonomous weapons, which means, in part, that landmines are less able to distinguish between targets.

Weaponised drones, for example, will not only identify and engage targets based on age, gender, location, dress code and so on, but will also be able to reprioritise objectives independently of any human operator. In addition, unlike landmines, which (probably) require some specialised training to build, weaponised drones will be produced en masse at low cost, fitted with commoditised hardware, will be programmable, and can be deployed at a distance from the target. These are tools of mass destruction for the consumer market, enabling a few to inflict immense harm on many.

The video below gives an example of how hundreds of drones can be coordinated by a single person. Imagine these drones fitted with explosives instead of flashing lights, and you start to get a sense of how much damage they could do in a crowded space and how difficult it would be to stop them.

Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.

The American Medical Association has published a position statement on the role of artificial intelligence in augmenting the work of medical professionals, but no professional organisation has yet taken a stance on banning autonomous weapons. It seems odd that we recognise the significance of AI for enhancing healthcare but not, apparently, its potential for increasing human suffering. The medical and health professional community should not only advocate for the use of AI to improve health but also work to ensure that it is not used for autonomous decision-making in armed conflict.

More reading and resources at https://futureoflife.org/2019/04/02/fli-podcast-why-ban-lethal-autonomous-weapons/.

Comment: Nvidia AI Turns Doodles Into Realistic Landscapes

Nvidia has shown that AI can use a simple representation of a landscape to render a photorealistic vista that doesn’t exist anywhere in the real world… It has just three tools: a paint bucket, a pen, and a paintbrush. After selecting your tool, you click on a material type at the bottom of the screen. Material types include things like tree, river, hill, mountain, rock, and sky. The organization of materials in the sketch tells the software what each part of the doodle is supposed to represent, and it generates a realistic version of it in real time.

Whitwam, R. (2019). Nvidia AI Turns Doodles Into Realistic Landscapes. Extreme Tech.

You may be tempted to think of this as substitution, where the algorithm looks at the shape you draw, notes the “material” it represents (e.g. a mountain) and then matches it to an existing image of that thing. But that’s not what’s happening here. The AI is creating a completely new version of what you’ve specified, based on what it knows that thing to look like.

So when you say that this shape is a mountain, the system draws on a general concept of “mountain”, which it uses to create something new. If it were simple substitution, the algorithm would need you to draw a shape that corresponds to an existing feature of the world. I suppose you could argue that this isn’t real creativity, but I think you’d be hard-pressed to say that it’s not moving in that direction. The problem (IMO) with every argument that AI is not creative is that these things only ever get better. It may not conform to the definition of creativity that you’re using today, but tomorrow it will.
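
To make the substitution-versus-generation distinction concrete, here is a toy sketch. It has nothing to do with how Nvidia’s system actually works (that uses a conditional generative adversarial network, far beyond this example); it only illustrates the conceptual difference: a lookup can only hand back an image it has already stored, whereas even a crude generative model produces a new sample each time from what it has learned about the class.

```python
# Toy contrast between substitution (lookup) and generation.
# This is NOT how Nvidia's tool works; it only illustrates the idea.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": five tiny 4x4 grayscale "mountain" images (made up)
mountain_examples = rng.uniform(0.4, 0.6, size=(5, 4, 4))

def substitute(label):
    """Substitution: return an image that already exists."""
    if label == "mountain":
        return mountain_examples[0]  # always the same stored picture
    raise KeyError(label)

# "Learning": summarise the training images as a per-pixel mean and spread
mean = mountain_examples.mean(axis=0)
std = mountain_examples.std(axis=0)

def generate(label):
    """Generation: sample a brand-new image from the learned distribution."""
    if label == "mountain":
        return rng.normal(mean, std)  # a "mountain" that never existed before
    raise KeyError(label)

print(np.array_equal(substitute("mountain"), substitute("mountain")))  # True: same stored image
print(np.array_equal(generate("mountain"), generate("mountain")))      # False: new image each time
```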

Link: How AI Will Rewire Us

Radical innovations have previously transformed the way humans live together. The advent of cities…meant a less nomadic existence and a higher population density. More recently, the invention of technologies including the printing press, the telephone, and the internet revolutionized how we store and communicate information.

As consequential as these innovations were, however, they did not change the fundamental aspects of human behaviour: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching.

But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are—not just in our direct interactions with the machines in question, but in our interactions with one another.

Christakis, N. (2019). How AI Will Rewire Us. The Atlantic.

The author describes a series of experiments showing how, depending on the nature of the interacting AI, human beings can be made to respond differently to their teammates and collaborators. For example, having a bot make minor errors and then apologise can nudge people towards being more compassionate with each other. This should give us pause as we consider how we want to design the systems that we’ll soon be working with.

For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship.


See also: Comment: In competition, people get discouraged by competent robots.

10 recommendations for the ethical use of AI

In February the New York Times hosted the New Work Summit, a conference that explored the opportunities and risks associated with the emergence of artificial intelligence across all aspects of society. Attendees worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence, the results of which are listed below.

  1. Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
  2. Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
  3. Privacy: Users should be able to easily opt out of data collection.
  4. Diversity: A.I. technology should be developed by inherently diverse teams.
  5. Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
  6. Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.
  7. Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
  8. Collective governance: Companies should work together to self-regulate the industry.
  9. Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
  10. “Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

The list of recommendations seems reasonable enough on the surface, although I wonder how practical they are given the business models of the companies most active in developing AI-based systems. As long as Google, Microsoft, Facebook, etc. are generating the bulk of their revenue from advertising that’s powered by the data we give them, they have little incentive to be transparent, to disclose, to be regulated, etc. If we opt our data out of the AI training pool, the AI is more susceptible to bias and less useful/accurate, so having more data is usually better for algorithm development. And having internal processes to build trust? That seems odd.

However, even though it’s easy to find issues with all of these recommendations it doesn’t mean that they’re not useful. The more of these kinds of conversations we have, the more likely it is that we’ll figure out a way to have AI that positively influences society.

Comment: In competition, people get discouraged by competent robots

After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.

Lefkowitz, M. (2019). In competition, people get discouraged by competent robots. Cornell Chronicle.

This is worth noting since it seems increasingly likely that we’ll soon be working not only with more competent robots but also with more competent software. There are already concerns about how clinicians will respond to the recommendations of clinical decision-support systems, especially when those systems make suggestions that are at odds with the clinician’s intuition.

Paradoxically, the effect may be even worse with expert clinicians, who may not always be able to explain their decision-making. Novices, who use more analytical frameworks (or even basic rules like IF this, THEN that) may find it easier to modify their decisions because their reasoning is more “visible” (System 2). Experts, who rely more on subconscious pattern recognition (System 1), may be less able to identify where in their reasoning process they fell victim to confounders like confirmation or availability bias, and so may be less likely to modify their decisions.
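
As a toy illustration of why that rule-based, System 2 style of reasoning is easier to inspect and revise than expert pattern recognition, compare the two functions below. The rule, threshold and weights are invented purely for the sake of the example and are not clinical guidance.

```python
# Toy contrast between "visible" rule-based reasoning and an opaque score.
# The rule, threshold and weights are invented and are not clinical guidance.

def novice_rule(age, nights_of_pain):
    """IF this, THEN that: every step in the reasoning can be pointed at and revised."""
    if age > 65 and nights_of_pain > 7:
        return "refer for imaging", "rule fired: age > 65 AND night pain > 7"
    return "conservative management", "rule did not fire"

def expert_style_score(age, nights_of_pain):
    """Stand-in for subconscious pattern recognition (or a black-box model):
    it returns an answer, but not the reasoning that produced it."""
    return 0.7 * (age / 100) + 0.3 * min(nights_of_pain / 14, 1.0)

decision, reasoning = novice_rule(age=70, nights_of_pain=10)
print(decision, "|", reasoning)                       # the reasoning is visible and debatable
print(expert_style_score(age=70, nights_of_pain=10))  # just a number; where would you push back?
```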

It seems really clear that we need to start thinking about how we’re going to prepare current and future clinicians for the arrival of intelligent agents in the clinical context. If we start disregarding the recommendations of clinical decision support systems, not because they produce errors in judgement but because we simply don’t like them, then there’s a strong case to be made that it is the human that we cannot trust.


Contrast this with automation bias, which is the tendency to give more credence to decisions made by machines because of a misplaced notion that algorithms are simply more trustworthy than people.

Comment: Why AI is a threat to democracy—and what we can do to stop it

The developmental track of AI is a problem, and every one of us has a stake. You, me, my dad, my next-door neighbor, the guy at the Starbucks that I’m walking past right now. So what should everyday people do? Be more aware of who’s using your data and how. Take a few minutes to read work written by smart people and spend a couple minutes to figure out what it is we’re really talking about. Before you sign your life away and start sharing photos of your children, do that in an informed manner. If you’re okay with what it implies and what it could mean later on, fine, but at least have that knowledge first.

Hao, K. (2019). Why AI is a threat to democracy—and what we can do to stop it. MIT Technology Review.

I agree that we all have a stake in how AI-based systems are introduced, which means that we all share some responsibility for helping to shape that process. While most of us can’t be involved in writing code for these systems, we can all be more intentional about what data we provide to companies working on artificial intelligence and about how they use that data (on a related note, have you ever wondered just how much data is being collected by Google, for example?). Here are some of the choices I’ve made about the software that I use most frequently:

  • Mobile operating system: I run LineageOS on my phone and tablet, which is based on Android but is modified so that the data on the phone stays on the phone, i.e. it is not reported back to Google.
  • Desktop/laptop operating system: I’ve used various Ubuntu Linux distributions since 2004, not only because Linux really is a better OS (faster, cheaper, more secure, etc.) but because open-source software is more trustworthy.
  • Browser: I switched from Chrome to Firefox with the release of Quantum, which saw Firefox catch up in performance metrics. With privacy as the default design consideration, it was an easy move to make. You should just switch to Firefox.
  • Email: I’ve looked around – a lot – and can’t find an email provider to replace Gmail. I use various front-ends to manage my email on different devices but that doesn’t get me away from the fact that Google still processes all of my emails on the back-end. I could pay for my email service provider – and there do seem to be good options – but then I’d be paying for email.
  • Search engine: I moved from Google Search to DuckDuckGo about a year ago and can’t say that I miss Google Search all that much. Every now and again I do find that I have to go to Google, especially for images.
  • Photo storage: Again, I’ve looked around for alternatives, but the combination of the free service, convenience (automatic upload of photos taken on my phone), unlimited storage (for lower-res copies) and the image-recognition features built into Google Photos makes this very difficult to move away from.
  • To do list: I’ve used Todoist and Any.do on and off for years but eventually moved to Todo.txt because I wanted to have more control over the things that I use on a daily basis. I like the fact that my work is stored in a text file and will be backwards compatible forever.
  • Note taking: I use a combination of Simplenote and Qownnotes for my notes. Simplenote is the equivalent of sticky notes (short-term notes that I make on my phone and delete after acting on them), and Qownnotes is for long-form note-taking and writing that stores notes as text files. Again, I want to control my data and these apps give me that control along with all of the features that I care about.
  • Maps: Google Maps is without equal and is so far ahead of anyone else that it’s very difficult to move away from. However, I’ve also used Here We Go on and off and it’s not bad for simple directions.

From the list above you can see that I pay attention to how my data is stored, shared and used, and that privacy is important to me. I’m not unsophisticated in my use of technology, and I still can’t get away from Google for email, photos and maps, arguably the most important data-gathering services that the company provides. Maybe there’s something that I’m missing, but companies like Google, Facebook, Amazon and Microsoft are so entangled in everything that we care about that I really don’t see a way to avoid using their products. The suggestion that users should be more careful about what data they share, and who they share it with, is a useful thought experiment, but the practical reality is that it would be very difficult indeed to avoid these companies altogether.

Google isn’t the only problem. See what Facebook knows about you.

Comment: Facebook says it’s going to make it harder to access anti-vax misinformation

Facebook won’t go as far as banning pages that spread anti-vaccine messages…[but] would make them harder to find. It will do this by reducing their ranking and not including them as recommendations or predictions in search.

Firth. N. (2019). Facebook says it’s going to make it harder to access anti-vax misinformation. MIT Technology Review.

Of course this is a good thing, right? Facebook – already one of the most important ways that people get their information – is going to make it more difficult for readers to find information that opposes vaccination. With the recent outbreak of measles in the United States we need to do more to ensure that searches for “vaccination” don’t also surface results encouraging parents not to vaccinate their children.

But what happens when Facebook (or Google, or Microsoft, or Amazon) starts making broader decisions about what information is credible, accurate or fake? That would actually be great if we could trust their algorithms. But trust requires that we’re allowed to see the algorithm (and also that we can understand it, which, in most cases, we can’t). In this case it’s a public health issue, and most reasonable people would see that the decision is the “right” one. But when companies tweak their algorithms to privilege certain types of information over others, then I think we need to be concerned. Today we agree with Facebook’s decision, but how confident can we be that we’ll still agree tomorrow?

Also, vaccines are awesome.