WCPT poster: Introduction to machine learning in healthcare

It’s a bit content-heavy and not as graphic-y as I’d like but c’est la vie.

I’m quite proud of what I think is an innovation in poster design: the addition of a tl;dr column before the findings. In other words, if you only have 30 seconds to look at the poster, that’s the bit to focus on. Related to this, I’ve also moved the Background, Methods and Conclusion sections to the bottom and made them smaller, so as to emphasise the Findings, which are placed first.

Here is the tl;dr version. Or, my poster in 8 tweets:

  • Aim: To identify the ways in which machine learning algorithms are being used across the health sector that may impact physiotherapy practice.
  • Image recognition: Millions of patient scans can be analysed in seconds, and diagnoses made by non-specialists via mobile phones, with lower rates of error than humans are capable of.
  • Video analysis: Constant video surveillance of patients will alert providers to those at risk of falling, as well as enable early diagnosis of movement-related disorders.
  • Natural language processing: Unstructured, freeform clinical notes will be converted into structured data that can be analysed, leading to increased accuracy in data capture and diagnosis.
  • Robotics: Autonomous robots will assist with physical tasks like patient transportation and possibly even take over manual therapy tasks from clinicians.
  • Expert systems: Knowing things about conditions will become less important than knowing when to trust outputs from clinical decision support systems.
  • Prediction: Clinicians should learn how to integrate the predictions of machine learning algorithms with human values in order to make better clinical decisions in partnership with AI-based systems.
  • Conclusion: The challenge we face is to bring together computers and humans in ways that enhance human well-being, augment human ability and expand human capacity.
My full-size poster on machine learning in healthcare for the 2019 WCPT conference in Geneva.

Reference list

  1. Yang, C. C., & Veltri, P. (2015). Intelligent healthcare informatics in big data era. Artificial Intelligence in Medicine, 65(2), 75–77. https://doi.org/10.1016/j.artmed.2015.08.002
  2. Qayyum, A., Anwar, S. M., Awais, M., & Majid, M. (2017). Medical image retrieval using deep convolutional neural network. Neurocomputing, 266, 8–20. https://doi.org/10.1016/j.neucom.2017.05.025
  3. Li, Z., Zhang, X., Müller, H., & Zhang, S. (2018). Large-scale retrieval for medical image analytics: A comprehensive review. Medical Image Analysis, 43, 66–84. https://doi.org/10.1016/j.media.2017.09.007
  4. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056
  5. Pratt, H., Coenen, F., Broadbent, D. M., Harding, S. P., & Zheng, Y. (2016). Convolutional Neural Networks for Diabetic Retinopathy. Procedia Computer Science, 90, 200–205. https://doi.org/10.1016/j.procs.2016.07.014
  6. Ramzan, M., Shafique, A., Kashif, M., & Umer, M. (2017). Gait Identification using Neural Network. International Journal of Advanced Computer Science and Applications, 8(9). https://doi.org/10.14569/IJACSA.2017.080909
  7. Kidziński, Ł., Delp, S., & Schwartz, M. (2019). Automatic real-time gait event detection in children using deep neural networks. PLOS ONE, 14(1), e0211466. https://doi.org/10.1371/journal.pone.0211466
  8. Horst, F., Lapuschkin, S., Samek, W., Müller, K.-R., & Schöllhorn, W. I. (2019). Explaining the Unique Nature of Individual Gait Patterns with Deep Learning. Scientific Reports, 9(1), 2391. https://doi.org/10.1038/s41598-019-38748-8
  9. Cai, T., Giannopoulos, A. A., Yu, S., Kelil, T., Ripley, B., Kumamaru, K. K., … Mitsouras, D. (2016). Natural Language Processing Technologies in Radiology Research and Clinical Applications. RadioGraphics, 36(1), 176–191. https://doi.org/10.1148/rg.2016150080
  10. Jackson, R. G., Patel, R., Jayatilleke, N., Kolliakou, A., Ball, M., Gorrell, G., … Stewart, R. (2017). Natural language processing to extract symptoms of severe mental illness from clinical text: The Clinical Record Interactive Search Comprehensive Data Extraction (CRIS-CODE) project. BMJ Open, 7(1), e012012. https://doi.org/10.1136/bmjopen-2016-012012
  11. Kreimeyer, K., Foster, M., Pandey, A., Arya, N., Halford, G., Jones, S. F., … Botsis, T. (2017). Natural language processing systems for capturing and standardizing unstructured clinical information: A systematic review. Journal of Biomedical Informatics, 73, 14–29. https://doi.org/10.1016/j.jbi.2017.07.012
  12. Montenegro, J. L. Z., Da Costa, C. A., & Righi, R. da R. (2019). Survey of Conversational Agents in Health. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2019.03.054
  13. Carrell, D. S., Schoen, R. E., Leffler, D. A., Morris, M., Rose, S., Baer, A., … Mehrotra, A. (2017). Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings. Journal of the American Medical Informatics Association, 24(5), 986–991. https://doi.org/10.1093/jamia/ocx039
  14. Oña, E. D., Cano-de la Cuerda, R., Sánchez-Herrera, P., Balaguer, C., & Jardón, A. (2018). A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb. Journal of Healthcare Engineering, 2018, 1–19. https://doi.org/10.1155/2018/9758939
  15. Krebs, H. I., & Volpe, B. T. (2015). Robotics: A Rehabilitation Modality. Current Physical Medicine and Rehabilitation Reports, 3(4), 243–247. https://doi.org/10.1007/s40141-015-0101-6
  16. Leng, M., Liu, P., Zhang, P., Hu, M., Zhou, H., Li, G., … Chen, L. (2019). Pet robot intervention for people with dementia: A systematic review and meta-analysis of randomized controlled trials. Psychiatry Research, 271, 516–525. https://doi.org/10.1016/j.psychres.2018.12.032
  17. Piatt, J., Nagata, S., Šabanović, S., Cheng, W.-L., Bennett, C., Lee, H. R., & Hakken, D. (2017). Companionship with a robot? Therapists’ perspectives on socially assistive robots as therapeutic interventions in community mental health for older adults. American Journal of Recreation Therapy, 15(4), 29–39. https://doi.org/10.5055/ajrt.2016.0117
  18. Troccaz, J., Dagnino, G., & Yang, G.-Z. (2019). Frontiers of Medical Robotics: From Concept to Systems to Clinical Translation. Annual Review of Biomedical Engineering, 21(1). https://doi.org/10.1146/annurev-bioeng-060418-052502
  19. Riek, L. D. (2017). Healthcare Robotics. ArXiv:1704.03931 [Cs]. Retrieved from http://arxiv.org/abs/1704.03931
  20. Kappassov, Z., Corrales, J.-A., & Perdereau, V. (2015). Tactile sensing in dexterous robot hands — Review. Robotics and Autonomous Systems, 74, 195–220. https://doi.org/10.1016/j.robot.2015.07.015
  21. Choi, C., Schwarting, W., DelPreto, J., & Rus, D. (2018). Learning Object Grasping for Soft Robot Hands. IEEE Robotics and Automation Letters, 3(3), 2370–2377. https://doi.org/10.1109/LRA.2018.2810544
  22. Shortliffe, E., & Sepulveda, M. (2018). Clinical Decision Support in the Era of Artificial Intelligence. Journal of the American Medical Association.
  23. Attema, T., Mancini, E., Spini, G., Abspoel, M., de Gier, J., Fehr, S., … Sloot, P. M. A. (n.d.). A new approach to privacy-preserving clinical decision support systems.
  24. Castaneda, C., Nalley, K., Mannion, C., Bhattacharyya, P., Blake, P., Pecora, A., … Suh, K. S. (2015). Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. Journal of Clinical Bioinformatics, 5(1). https://doi.org/10.1186/s13336-015-0019-3
  25. Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA Internal Medicine, 178(11), 1544. https://doi.org/10.1001/jamainternmed.2018.3763
  26. Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. ArXiv:1804.02969 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1804.02969
  27. Weng, S. F., Reps, J., Kai, J., Garibaldi, J. M., & Qureshi, N. (2017). Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLOS ONE, 12(4), e0174944. https://doi.org/10.1371/journal.pone.0174944
  28. Suresh, H., Hunt, N., Johnson, A., Celi, L. A., Szolovits, P., & Ghassemi, M. (2017). Clinical Intervention Prediction and Understanding using Deep Networks. ArXiv:1705.08498 [Cs]. Retrieved from http://arxiv.org/abs/1705.08498
  29. Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689
  30. Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA, 319(1), 19. https://doi.org/10.1001/jama.2017.19198

Comment: Separating the Art of Medicine from Artificial Intelligence

…the only really useful value of artificial intelligence in chest radiography is, at best, to provide triage support — tell us what is normal and what is not, and highlight where it could possibly be abnormal. Just don’t try and claim that AI can definitively tell us what the abnormality is, because it can’t do so any more accurately than we can because the data is dirty because we made it thus.

This is a generally good article on the challenges of using poorly annotated medical data to train machine learning algorithms. However, there are three relevant points that the author doesn’t address at all:

  1. He assumes that algorithms will only be trained on chest images that have been annotated by human beings. They won’t. In fact, I can’t see why anyone would do this anyway, for exactly the reasons he states. What is more likely is that AI will look across a wide range of clinical data points and use them in association with the CXR to determine a diagnosis. So, if the (actual) diagnosis is a cardiac issue, you’d expect the algorithm to correlate the image with cardiac markers and assign less weight to infection markers. Likewise, if the diagnosis were pneumonia, you’d see changes in infection markers and little weight assigned to cardiac information. In other words, the analysis of CXRs won’t be informed by human-annotated reports; it’ll happen through correlation with all the other clinical information gathered from the patient.
  2. He starts out by presenting a really detailed argument explaining the incredibly low inter-rater reliability, inaccuracy and weak validity of human judges (in this case, radiologists) when it comes to analysing chest X-rays, but then ends by saying that we should leave the interpretation to them anyway, rather than to algorithms.
  3. He is a radiologist, which should at least give one pause when considering that the final recommendation is to leave things to the radiologists.
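The multimodal idea in the first point can be sketched in a few lines. This is a deliberately toy illustration, not anything from the article: the marker names, weights and scores are all invented, and a real system would learn the weighting from data rather than hard-code it.

```python
# Toy sketch: combine an image model's score with other clinical markers.
# All names, values and weights below are hypothetical.

def combined_score(image_score: float, markers: dict, weights: dict) -> float:
    """Weighted sum of the image model's output and lab-marker evidence."""
    score = weights["image"] * image_score
    for name, value in markers.items():
        score += weights.get(name, 0.0) * value
    return score

# Hypothetical weighting for a cardiac diagnosis: cardiac markers
# (e.g. troponin) carry weight, infection markers (e.g. CRP) carry little.
cardiac_weights = {"image": 0.5, "troponin": 0.4, "crp": 0.1}

patient = {"troponin": 0.9, "crp": 0.1}  # elevated cardiac marker
score = combined_score(0.7, patient, cardiac_weights)
```

The point of the sketch is only that the image is one signal among many, and that how much weight the other markers carry shifts depending on which diagnosis is being considered.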

These points aside, the author makes an excellent case for why we need to make sure that medical data are clean and annotated with machine-readable tags. Well worth a read.

If Artificial Intelligence Only Benefits a Select Few, Everyone Loses

…nations that have begun to prepare for and explore AI will reap the benefits of an economic boom. The report also demonstrates how anyone who hasn’t prepared, especially in developing nations, will be left behind… In the developing world, in the developing countries or countries with transition economies, there is much less discussion of AI, both from the benefit or the risk side.

The growing divide between nations that are prepared for widespread automation and those that aren’t, between companies that can cut costs by replacing workers and the newly unemployed people themselves, puts us on a collision course for conflict and backlash against further developing and deploying AI technology.

Source: Robitzski, D. (2018). If Artificial Intelligence Only Benefits a Select Few, Everyone Loses.

A short post that’s drawn mainly from the 64-page McKinsey report (Notes From the Frontier: Modeling the Impact of AI on the World Economy). This is something that I’ve tried to highlight when I’ve talked about this technology to skeptical colleagues: in many cases, AI in the workplace will arrive as a software update and will, therefore, be available in developing as well as developed countries. This isn’t like buying a new MRI machine, where the cost is in the hardware and ongoing support. The existing MRI machine will get an update over the internet, and from then on it’ll include analysis of the image and automated reporting. And now the cost of running your radiology department at full staff capacity starts to look more expensive than it needs to be. This says nothing of the other important tasks that radiologists perform; the fact is that a big component of their daily work involves classifying images, and for human beings, that ship has sailed. While in more developed economies it may be easier to relocate expertise within the same institution, I don’t think we’re going to have that luxury in the developing world. If we’re not thinking about these problems today, we’re going to be awfully unprepared when that software update arrives.

Eagle-eyed machine learning algorithm outdoes human experts — ScienceDaily

“Human detection and identification is error-prone, inconsistent and inefficient. Perhaps most importantly, it’s not scalable,” says Morgan. “Newer imaging technologies are outstripping human capabilities to analyze the data we can produce.”

Source: Eagle-eyed machine learning algorithm outdoes human experts — ScienceDaily

The point here is that data is being generated faster than we can analyse and interpret it. Big data is not a storage problem, it’s an analysis problem. Yes, we’ve had large sets of data before (think, libraries) but no-one expected a human being to read through, and make sense of, all of it. Now that digital health-related data is being generated by institutions (e.g. CT and MRI scans, EHRs), wearables (e.g. Fitbits, smart contact lenses), embeddables (e.g. wifi enabled pacemakers, insulin pumps) and ingestibles (e.g. bluetooth-enabled smart pills), it’s clear that no single service provider will have the cognitive capacity to analyse and interpret the data flowing from patients at that scale.

As more and more of the data we use in healthcare is digitised, we’ll need algorithmic assistance to filter out and highlight what is important for our specific context (i.e. what a physio needs to know about, rather than what a nurse needs). There will obviously be a role for health professionals in designing and evaluating those algorithms, but will we be forward-thinking enough to clearly describe those roles and to prepare future clinicians for them?
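To make the filtering idea concrete, here’s a toy sketch of role-aware filtering. Everything in it is hypothetical: the roles, tags and events are invented for illustration, and a real system would use learned relevance models rather than a hand-written tag map.

```python
# Toy sketch: route only the clinical events a given profession cares about.
# Roles, tags and events are all made up for illustration.

ROLE_INTERESTS = {
    "physio": {"mobility", "falls", "musculoskeletal"},
    "nurse": {"medication", "vitals", "wound-care"},
}

def filter_for_role(events: list, role: str) -> list:
    """Keep events whose tags intersect the role's interests."""
    interests = ROLE_INTERESTS.get(role, set())
    return [e for e in events if interests & set(e["tags"])]

events = [
    {"note": "Patient unsteady on stairs", "tags": ["mobility", "falls"]},
    {"note": "BP 150/95 at 08:00", "tags": ["vitals"]},
]

physio_view = filter_for_role(events, "physio")  # only the mobility event
```

The design point is simply that the same stream of patient data yields different views for different professions, which is the kind of context-specific filtering described above.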

Medical images: the only photos not in the cloud – AI Med

We know that AI could prove highly beneficial for radiologists by cutting down on read times and improving accuracy. In addition, AI could be a strong resource for mining large data sets for both individual patient care and global insights. But first, we must access the images.

Today’s traditional hardware, CDs, and PACS (picture archiving communications system) lock data deep inside them and prevent interoperability.

Source: Medical images: the only photos not in the cloud – AI Med

I’d never considered this before but it’s obviously true, and for good reason: patient anonymity and privacy justify locking down medical images. But it also means that we won’t be able to run machine learning algorithms on that data, nor will we be able to compare data from different populations when the medical images sit on different servers in different countries and are regulated by different laws and policies.

If we want to see the kinds of progress being made in other areas of image classification, we may need to reconsider our current policies around sharing patient data. Of course we’ll need consent from patients, as well as a means of ensuring data transfer across systems. This second point alone would be worth pursuing anyway, as it may lead to a set of (open) standards for interoperability between different EHR systems.

As with all things related to machine learning, having access to high-fidelity, well-labelled data is key. If we don’t make patient data accessible in some format or another, we may find it hard to use AI-based systems in healthcare. This obviously assumes that we want AI-based systems in healthcare in the first place.