Comment: Lessons learned building natural language processing systems in health care

Many people make the mistake of assuming that clinical notes are written in English. That happens because that’s how doctors will answer if you ask them what language they use.

Talby, D. (2019). Lessons learned building natural language processing systems in health care. O’Reilly.

This is an interesting post making the point that medical language – especially as written in clinical notes – is not the same as other, more typical, human languages. This matters when training natural language processing (NLP) models for healthcare because medical language has its own vocabulary, grammatical structure, and semantics. Trying to get an NLP system to “understand”* medical language is a fundamentally different problem from understanding other languages.

The lessons from this article are slightly technical (although not difficult to follow) and do a good job highlighting why NLP in health systems is seeing slower progress than the NLP running on your phone. You may think that, since Google Translate does quite well translating between English and Spanish, for example, it should also be able to translate between English and “Radiography”. This article explains why that problem is not only harder than “normal” translation, but also different.
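As a toy illustration of the vocabulary problem (my own made-up example, not from the article): a general-purpose English pipeline has no way of knowing that a string like “pt c/o SOB, hx of MI” is a dense clinical sentence, so a medical NLP system typically needs its own lexicon before any modelling even starts. A minimal sketch of that kind of shorthand expansion:

    # A tiny, hand-rolled abbreviation lexicon. Real clinical systems use
    # curated vocabularies; this is only an illustration.
    CLINICAL_ABBREVIATIONS = {
        "pt": "patient",
        "c/o": "complains of",
        "sob": "shortness of breath",
        "hx": "history",
        "mi": "myocardial infarction",
    }

    def expand_shorthand(note: str) -> str:
        """Naively expand known clinical abbreviations in a free-text note."""
        tokens = note.lower().replace(",", " ,").split()
        expanded = [CLINICAL_ABBREVIATIONS.get(tok, tok) for tok in tokens]
        return " ".join(expanded).replace(" ,", ",")

    print(expand_shorthand("Pt c/o SOB, hx of MI"))
    # -> patient complains of shortness of breath, history of myocardial infarction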

* Note: I’m saying “understand” while recognising that current NLP systems understand nothing. They’re statistically modelling the likelihood that certain words follow certain other words and have no concept of what those words mean.
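To make that footnote concrete, here is a minimal sketch of the kind of statistical modelling I mean: a toy bigram model that simply counts which words tend to follow which, with no notion of what any of them mean. The sentences are invented for illustration.

    from collections import Counter, defaultdict

    # Toy corpus; the sentences are invented purely for illustration.
    corpus = [
        "patient complains of chest pain",
        "patient complains of shortness of breath",
        "patient denies chest pain",
    ]

    # Count how often each word follows each other word (a bigram model).
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1

    # P(next word | "complains"), estimated purely from co-occurrence counts.
    counts = follows["complains"]
    total = sum(counts.values())
    for word, count in counts.items():
        print(f"P({word!r} | 'complains') = {count / total:.2f}")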

Comment: Training a single AI model can emit as much carbon as five cars in their lifetimes

The results underscore another growing problem in AI, too: the sheer intensity of resources now required to produce paper-worthy results has made it increasingly challenging for people working in academia to continue contributing to research. “This trend toward training huge models on tons of data is not feasible for academics…because we don’t have the computational resources. So there’s an issue of equitable access between researchers in academia versus researchers in industry.”

Hao, K. (2019). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review.

The article focuses on the scale of the financial and environmental cost of training natural language processing (NLP) models, comparing the carbon emissions of various AI models to those of a car over its lifetime. To be honest, this isn’t something I’ve given much thought to, but seeing it laid out visually really drives the point home.
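For a sense of how figures like these are produced, the arithmetic is essentially power draw × training time × datacentre overhead × grid carbon intensity. The sketch below uses invented hardware and grid numbers purely to show the calculation; it doesn’t reproduce the estimates from the article or the underlying paper.

    # Back-of-the-envelope carbon estimate for a single training run.
    # All numbers below are illustrative assumptions, not figures from the article.

    gpu_power_kw = 0.3          # average draw per GPU in kW (assumed)
    num_gpus = 8                # GPUs used for the run (assumed)
    training_hours = 24 * 7     # one week of training (assumed)
    pue = 1.6                   # datacentre power usage effectiveness (assumed)
    grid_kg_co2_per_kwh = 0.45  # rough grid carbon intensity (assumed)

    energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
    emissions_kg = energy_kwh * grid_kg_co2_per_kwh

    print(f"Energy used: {energy_kwh:,.0f} kWh")
    print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")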

As much as this is a cause for concern, I’m less worried about it in the long term, for the following reason. As the authors state in the article, the code and models for AI and NLP are currently very inefficient; they don’t need to be lean, because compute is relatively easy to come by (if you’re Google or Facebook). I think that the models will get more efficient, as is evident from the fact that new computer vision algorithms can reach the same outcomes with datasets that are orders of magnitude smaller than was previously possible.

For me, though, the quote that I’ve pulled from the article to start this post is more compelling. If the costs of training NLP models are so high, it seems likely that companies like Google, Facebook and Amazon will be the only ones able to do the high-end research necessary to drive the field forward. Academics at universities have an incentive to create more efficient models, which they then publish; companies can take advantage of those more efficient models while also having access to far more computational resources.

From where I’m standing this makes it seem that private companies will always be at the forefront of AI development, which makes me less optimistic than if it were driven by academics. Maybe I’m just being naive (and probably also biased) but this seems less than ideal.

You can find the full paper here on arxiv.

WCPT poster: Introduction to machine learning in healthcare

It’s a bit content-heavy and not as graphic-y as I’d like but c’est la vie.

I’m quite proud of what I think is a novel element of the poster design: the addition of a tl;dr column before the findings. In other words, if you only have 30 seconds to look at the poster then that’s the bit you want to focus on. Related to this, I’ve also moved the Background, Methods and Conclusion sections to the bottom and made them smaller so as to emphasise the Findings, which are placed first.

Here is the tl;dr version. Or, my poster in 8 tweets:

  • Aim: The aim of the study was to identify the ways in which machine learning algorithms are being used across the health sector that may impact physiotherapy practice.
  • Image recognition: Millions of patient scans can be analysed in seconds, and diagnoses made by non-specialists via mobile phones, with lower rates of error than humans are capable of.
  • Video analysis: Constant video surveillance of patients will alert providers of those at risk of falling, as well as make early diagnoses of movement-related disorders.
  • Natural language processing: Unstructured, freeform clinical notes will be converted into structured data that can be analysed, leading to increased accuracy in data capture and diagnosis.
  • Robotics: Autonomous robots will assist with physical tasks like patient transportation and possibly even take over manual therapy tasks from clinicians.
  • Expert systems: Knowing things about conditions will become less important than knowing when to trust outputs from clinical decision support systems.
  • Prediction: Clinicians should learn how to integrate the predictions of machine learning algorithms with human values in order to make better clinical decisions in partnership with AI-based systems.
  • Conclusion: The challenge we face is to bring together computers and humans in ways that enhance human well-being, augment human ability and expand human capacity.
My full-size poster on machine learning in healthcare for the 2019 WCPT conference in Geneva.

Reference list (download this list as a Word document)

  1. Yang, C. C., & Veltri, P. (2015). Intelligent healthcare informatics in big data era. Artificial Intelligence in Medicine, 65(2), 75–77. https://doi.org/10.1016/j.artmed.2015.08.002
  2. Qayyum, A., Anwar, S. M., Awais, M., & Majid, M. (2017). Medical image retrieval using deep convolutional neural network. Neurocomputing, 266, 8–20. https://doi.org/10.1016/j.neucom.2017.05.025
  3. Li, Z., Zhang, X., Müller, H., & Zhang, S. (2018). Large-scale retrieval for medical image analytics: A comprehensive review. Medical Image Analysis, 43, 66–84. https://doi.org/10.1016/j.media.2017.09.007
  4. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056
  5. Pratt, H., Coenen, F., Broadbent, D. M., Harding, S. P., & Zheng, Y. (2016). Convolutional Neural Networks for Diabetic Retinopathy. Procedia Computer Science, 90, 200–205. https://doi.org/10.1016/j.procs.2016.07.014
  6. Ramzan, M., Shafique, A., Kashif, M., & Umer, M. (2017). Gait Identification using Neural Network. International Journal of Advanced Computer Science and Applications, 8(9). https://doi.org/10.14569/IJACSA.2017.080909
  7. Kidziński, Ł., Delp, S., & Schwartz, M. (2019). Automatic real-time gait event detection in children using deep neural networks. PLOS ONE, 14(1), e0211466. https://doi.org/10.1371/journal.pone.0211466
  8. Horst, F., Lapuschkin, S., Samek, W., Müller, K.-R., & Schöllhorn, W. I. (2019). Explaining the Unique Nature of Individual Gait Patterns with Deep Learning. Scientific Reports, 9(1), 2391. https://doi.org/10.1038/s41598-019-38748-8
  9. Cai, T., Giannopoulos, A. A., Yu, S., Kelil, T., Ripley, B., Kumamaru, K. K., … Mitsouras, D. (2016). Natural Language Processing Technologies in Radiology Research and Clinical Applications. RadioGraphics, 36(1), 176–191. https://doi.org/10.1148/rg.2016150080
  10. Jackson, R. G., Patel, R., Jayatilleke, N., Kolliakou, A., Ball, M., Gorrell, G., … Stewart, R. (2017). Natural language processing to extract symptoms of severe mental illness from clinical text: The Clinical Record Interactive Search Comprehensive Data Extraction (CRIS-CODE) project. BMJ Open, 7(1), e012012. https://doi.org/10.1136/bmjopen-2016-012012
  11. Kreimeyer, K., Foster, M., Pandey, A., Arya, N., Halford, G., Jones, S. F., … Botsis, T. (2017). Natural language processing systems for capturing and standardizing unstructured clinical information: A systematic review. Journal of Biomedical Informatics, 73, 14–29. https://doi.org/10.1016/j.jbi.2017.07.012
  12. Montenegro, J. L. Z., Da Costa, C. A., & Righi, R. da R. (2019). Survey of Conversational Agents in Health. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2019.03.054
  13. Carrell, D. S., Schoen, R. E., Leffler, D. A., Morris, M., Rose, S., Baer, A., … Mehrotra, A. (2017). Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings. Journal of the American Medical Informatics Association, 24(5), 986–991. https://doi.org/10.1093/jamia/ocx039
  14. Oña, E. D., Cano-de la Cuerda, R., Sánchez-Herrera, P., Balaguer, C., & Jardón, A. (2018). A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb. Journal of Healthcare Engineering, 2018, 1–19. https://doi.org/10.1155/2018/9758939
  15. Krebs, H. I., & Volpe, B. T. (2015). Robotics: A Rehabilitation Modality. Current Physical Medicine and Rehabilitation Reports, 3(4), 243–247. https://doi.org/10.1007/s40141-015-0101-6
  16. Leng, M., Liu, P., Zhang, P., Hu, M., Zhou, H., Li, G., … Chen, L. (2019). Pet robot intervention for people with dementia: A systematic review and meta-analysis of randomized controlled trials. Psychiatry Research, 271, 516–525. https://doi.org/10.1016/j.psychres.2018.12.032
  17. Piatt, J., Nagata, S., Šabanović, S., Cheng, W.-L., Bennett, C., Lee, H. R., & Hakken, D. (2017). Companionship with a robot? Therapists’ perspectives on socially assistive robots as therapeutic interventions in community mental health for older adults. American Journal of Recreation Therapy, 15(4), 29–39. https://doi.org/10.5055/ajrt.2016.0117
  18. Troccaz, J., Dagnino, G., & Yang, G.-Z. (2019). Frontiers of Medical Robotics: From Concept to Systems to Clinical Translation. Annual Review of Biomedical Engineering, 21(1). https://doi.org/10.1146/annurev-bioeng-060418-052502
  19. Riek, L. D. (2017). Healthcare Robotics. ArXiv:1704.03931 [Cs]. Retrieved from http://arxiv.org/abs/1704.03931
  20. Kappassov, Z., Corrales, J.-A., & Perdereau, V. (2015). Tactile sensing in dexterous robot hands — Review. Robotics and Autonomous Systems, 74, 195–220. https://doi.org/10.1016/j.robot.2015.07.015
  21. Choi, C., Schwarting, W., DelPreto, J., & Rus, D. (2018). Learning Object Grasping for Soft Robot Hands. IEEE Robotics and Automation Letters, 3(3), 2370–2377. https://doi.org/10.1109/LRA.2018.2810544
  22. Shortliffe, E., & Sepulveda, M. (2018). Clinical Decision Support in the Era of Artificial Intelligence. Journal of the American Medical Association.
  23. Attema, T., Mancini, E., Spini, G., Abspoel, M., de Gier, J., Fehr, S., … Sloot, P. M. A. (n.d.). A new approach to privacy-preserving clinical decision support systems. 15.
  24. Castaneda, C., Nalley, K., Mannion, C., Bhattacharyya, P., Blake, P., Pecora, A., … Suh, K. S. (2015). Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. Journal of Clinical Bioinformatics, 5(1). https://doi.org/10.1186/s13336-015-0019-3
  25. Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA Internal Medicine, 178(11), 1544. https://doi.org/10.1001/jamainternmed.2018.3763
  26. Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. ArXiv:1804.02969 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1804.02969
  27. Weng, S. F., Reps, J., Kai, J., Garibaldi, J. M., & Qureshi, N. (2017). Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLOS ONE, 12(4), e0174944. https://doi.org/10.1371/journal.pone.0174944
  28. Suresh, H., Hunt, N., Johnson, A., Celi, L. A., Szolovits, P., & Ghassemi, M. (2017). Clinical Intervention Prediction and Understanding using Deep Networks. ArXiv:1705.08498 [Cs]. Retrieved from http://arxiv.org/abs/1705.08498
  29. Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689
  30. Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA, 319(1), 19. https://doi.org/10.1001/jama.2017.19198

Mozilla’s Common Voice project

Any high-quality speech-to-text engines require thousands of hours of voice data to train them, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question, how might we collect large quantities of voice data for Open Source machine learning?

Source: Branson, M. (2018). We’re intentionally designing open experiences, here’s why.

One of the big problems with the development of AI is that few organisations have the large, inclusive, diverse datasets that are necessary to reduce the inherent bias in algorithmic training. Mozilla’s Common Voice project is an attempt to create a large, multilanguage dataset of human voices with which to train natural language AI.

This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences.

I think that voice and audio are probably going to be the next computer-user interface, so this is an important project to support if we want to make sure that Google, Facebook, Baidu and Tencent don’t have a monopoly on natural language processing. I see this project existing on the same continuum as OpenAI, which aims to ensure that “…AGI’s benefits are as widely and evenly distributed as possible.” Whatever you think about the possibility of AGI arriving anytime soon, I think it’s a good thing that people are working to ensure that the benefits of AI aren’t mediated by a few gatekeepers whose primary function is to increase shareholder value.

Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation. So we’ve launched Common Voice, a project to help make voice recognition open and accessible to everyone. Now you can donate your voice to help us build an open-source voice database that anyone can use to make innovative apps for devices and the web. Read a sentence to help machines learn how real people speak. Check the work of other contributors to improve the quality. It’s that simple!

The datasets are openly licensed and available for anyone to download and use, alongside other open language datasets that Mozilla links to on the page. This is an important project that everyone should consider contributing to. The interface is intuitive and makes it very easy to either submit your own voice or to validate the recordings that other people have made. Why not give it a go?
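If you want to work with the data rather than just contribute to it, a downloaded Common Voice release is essentially a folder of audio clips plus tab-separated metadata. The sketch below assumes the release layout I’ve seen (a clips/ folder of mp3s and TSV files such as validated.tsv with path and sentence columns), so check your own download, since file and column names may differ between versions.

    import csv
    from pathlib import Path

    # Path to an extracted Common Voice release (assumed layout: a clips/ folder
    # plus TSV metadata files such as validated.tsv). Adjust for your download.
    release_dir = Path("cv-corpus/en")

    with open(release_dir / "validated.tsv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f, delimiter="\t"))

    print(f"{len(rows)} validated clips")
    for row in rows[:3]:
        # Each row points at an audio clip and the sentence that was read aloud.
        print(release_dir / "clips" / row["path"], "->", row["sentence"])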

Doctors are burning out because electronic medical records are broken

For all the promise that digital records hold for making the system more efficient—and the very real benefit these records have already brought in areas like preventing medication errors—EMRs aren’t working on the whole. They’re time consuming, prioritize billing codes over patient care, and too often force physicians to focus on digital recordkeeping rather than the patient in front of them.

Source: Minor, L. (2017). Doctors are burning out because electronic medical records are broken.

I’ve read that some physicians can spend up to 60% of their day capturing patient information in the EHR. And this isn’t because there’s a lot of information. It’s often down to confusing user interfaces, misguided approaches to security (e.g. having to enter multiple different passwords, and a lack of off-site access), and poor design that results in physicians capturing more information than necessary.

There’s interest in using natural language processing to analyse recorded conversations between clinicians and their colleagues or patients, and while the technology is still unsuitable for mainstream use, it seems likely that it will keep improving until it is.

AI Can Now Identify Racist Code Words on Social Media

“We essentially gathered hateful tweets and used language processing to find the other terms that were associated with such messages… We learned these terms and used them as the bridge to new terms—as long as we have those words, we have a link to anything they can come up with.” This defeats attempts to conceal racist slurs using codes by targeting the language that makes up the cultural matrix from which the hate emerges, instead of just seeking out keywords. Even if the specific slurs used by racists change in order to escape automated comment moderation, the other terms they use to identify themselves and their communities likely won’t.

Source: Pearson, J. (2017). AI Can Now Identify Racist Code Words on Social Media.

There are a few things I thought were worth noting:

  • The developers of this algorithm used Tweets to identify the hateful language, which says something about the general quality of discourse on Twitter.
  • The algorithm isn’t simply substituting one set of keywords for another; it identifies the context of the sentence in order to determine whether the sentiment is hateful. The specific words almost don’t matter. This is a significant step in natural language processing (a rough sketch of the general idea follows this list).
  • The post appeared in 2017 so it’s a year old and I haven’t looked to see what (if any) progress has been made since then.
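The article doesn’t include the researchers’ code, but the general idea of learning “the other terms that were associated with such messages” can be sketched with word embeddings: train vectors on a corpus of posts and ask which words sit closest to a known seed term. The snippet below uses the gensim library and an invented, deliberately innocuous toy corpus purely to show the mechanics; it is not the researchers’ actual method.

    from gensim.models import Word2Vec

    # Toy corpus of tokenised posts; in practice this would be a large
    # collection of tweets. The content here is invented and innocuous.
    posts = [
        ["banana", "smoothie", "recipe", "breakfast"],
        ["banana", "bread", "recipe", "baking"],
        ["smoothie", "breakfast", "healthy", "recipe"],
        ["baking", "bread", "oven", "flour"],
    ]

    # Learn embeddings from co-occurrence: words used in similar contexts
    # end up with similar vectors (gensim 4.x parameter names).
    model = Word2Vec(posts, vector_size=50, window=3, min_count=1, seed=1, epochs=200)

    # Terms most associated with a seed word act as the "bridge" to new terms.
    for word, score in model.wv.most_similar("recipe", topn=3):
        print(word, round(score, 2))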