Research project exploring clinicians’ perspectives on the introduction of ML into clinical practice

I recently received ethics clearance to begin an explorative study looking at how physiotherapists think about the introduction of machine learning into clinical practice. The study will use an international survey and a series of interviews to gather data on clinicians’ perspectives on questions like the following:

  • What aspects of clinical practice are vulnerable to automation?
  • How do we think about trust when it comes to AI-based clinical decision support?
  • What is the role of the clinician in guiding the development of AI in clinical practice?

I’m busy finalising the questionnaire and hope to have the survey up and running in a couple of weeks, with more focused interviews following. If these kinds of questions interest you and you’d like to have a say in answering them, keep an eye out for a call to respond.

Here is the study abstract (contact me if you’d like more detailed information):

Background: Artificial intelligence (AI) is a branch of computer science that aims to embed intelligent behaviour into software in order to achieve certain objectives. Increasingly, AI is being integrated into a variety of healthcare and clinical applications and there is significant research and funding being directed at improving the performance of these systems in clinical practice. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and clinical decision-making.

Aim: To understand how clinicians perceive the introduction of AI into professional practice, as a first step towards ensuring that we successfully integrate machine intelligence with the essential human characteristics of empathic, caring and creative clinical practice.

Methods: This study will make use of an explorative design to gather qualitative data via an online survey and a series of interviews with physiotherapy clinicians from around the world. The survey questionnaire will be self-administered and piloted to check for validity and ambiguity, and the interview guide will be informed by the study aim. As this is an explorative study with a convenience sample, no a priori sample size will be calculated.

Article published – An introduction to machine learning for clinicians

It’s a nice coincidence that my article on machine learning for clinicians has been published at around the same time that my poster on a similar topic was presented at WCPT. I’m quite happy with this paper: it offers an overview of machine learning that is specific to clinical practice and should help clinicians make sense of what is at times a confusing topic. The mainstream media (and, to be honest, many academics) conflate a wide variety of terms when they talk about artificial intelligence, and this paper goes some way towards providing background for anyone interested in how these technologies will affect clinical work. You can download the preprint here.


Abstract

The technology at the heart of the most innovative progress in health care artificial intelligence (AI) is in a sub-domain called machine learning (ML), which describes the use of software algorithms to identify patterns in very large data sets. ML has driven much of the progress of health care AI over the past five years, demonstrating impressive results in clinical decision support, patient monitoring and coaching, surgical assistance, patient care, and systems management. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyze and interpret the complex interactions between data, patients, and clinical decision-makers. However, as this technology becomes more powerful it also becomes less transparent, and algorithmic decisions are therefore increasingly opaque. This is problematic because computers will increasingly be asked for answers to clinical questions that have no single right answer, are open-ended, subjective, and value-laden. As ML continues to make important contributions in a variety of clinical domains, clinicians will need to have a deeper understanding of the design, implementation, and evaluation of ML to ensure that current health care is not overly influenced by the agenda of technology entrepreneurs and venture capitalists. The aim of this article is to provide a non-technical introduction to the concept of ML in the context of health care, the challenges that arise, and the resulting implications for clinicians.

WCPT poster: Introduction to machine learning in healthcare

It’s a bit content-heavy and not as graphic-y as I’d like but c’est la vie.

I’m quite proud of what I think is an innovation in poster design: the addition of a tl;dr column before the findings. In other words, if you only have 30 seconds to look at the poster then that’s the bit to focus on. Related to this, I’ve also moved the Background, Methods and Conclusion sections to the bottom and made them smaller so as to emphasise the Findings, which are placed first.

My full-size poster on machine learning in healthcare for the 2019 WCPT conference in Geneva.

Reference list (download this list as a Word document)

  1. Yang, C. C., & Veltri, P. (2015). Intelligent healthcare informatics in big data era. Artificial Intelligence in Medicine, 65(2), 75–77. https://doi.org/10.1016/j.artmed.2015.08.002
  2. Qayyum, A., Anwar, S. M., Awais, M., & Majid, M. (2017). Medical image retrieval using deep convolutional neural network. Neurocomputing, 266, 8–20. https://doi.org/10.1016/j.neucom.2017.05.025
  3. Li, Z., Zhang, X., Müller, H., & Zhang, S. (2018). Large-scale retrieval for medical image analytics: A comprehensive review. Medical Image Analysis, 43, 66–84. https://doi.org/10.1016/j.media.2017.09.007
  4. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056
  5. Pratt, H., Coenen, F., Broadbent, D. M., Harding, S. P., & Zheng, Y. (2016). Convolutional Neural Networks for Diabetic Retinopathy. Procedia Computer Science, 90, 200–205. https://doi.org/10.1016/j.procs.2016.07.014
  6. Ramzan, M., Shafique, A., Kashif, M., & Umer, M. (2017). Gait Identification using Neural Network. International Journal of Advanced Computer Science and Applications, 8(9). https://doi.org/10.14569/IJACSA.2017.080909
  7. Kidziński, Ł., Delp, S., & Schwartz, M. (2019). Automatic real-time gait event detection in children using deep neural networks. PLOS ONE, 14(1), e0211466. https://doi.org/10.1371/journal.pone.0211466
  8. Horst, F., Lapuschkin, S., Samek, W., Müller, K.-R., & Schöllhorn, W. I. (2019). Explaining the Unique Nature of Individual Gait Patterns with Deep Learning. Scientific Reports, 9(1), 2391. https://doi.org/10.1038/s41598-019-38748-8
  9. Cai, T., Giannopoulos, A. A., Yu, S., Kelil, T., Ripley, B., Kumamaru, K. K., … Mitsouras, D. (2016). Natural Language Processing Technologies in Radiology Research and Clinical Applications. RadioGraphics, 36(1), 176–191. https://doi.org/10.1148/rg.2016150080
  10. Jackson, R. G., Patel, R., Jayatilleke, N., Kolliakou, A., Ball, M., Gorrell, G., … Stewart, R. (2017). Natural language processing to extract symptoms of severe mental illness from clinical text: The Clinical Record Interactive Search Comprehensive Data Extraction (CRIS-CODE) project. BMJ Open, 7(1), e012012. https://doi.org/10.1136/bmjopen-2016-012012
  11. Kreimeyer, K., Foster, M., Pandey, A., Arya, N., Halford, G., Jones, S. F., … Botsis, T. (2017). Natural language processing systems for capturing and standardizing unstructured clinical information: A systematic review. Journal of Biomedical Informatics, 73, 14–29. https://doi.org/10.1016/j.jbi.2017.07.012
  12. Montenegro, J. L. Z., Da Costa, C. A., & Righi, R. da R. (2019). Survey of Conversational Agents in Health. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2019.03.054
  13. Carrell, D. S., Schoen, R. E., Leffler, D. A., Morris, M., Rose, S., Baer, A., … Mehrotra, A. (2017). Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings. Journal of the American Medical Informatics Association, 24(5), 986–991. https://doi.org/10.1093/jamia/ocx039
  14. Oña, E. D., Cano-de la Cuerda, R., Sánchez-Herrera, P., Balaguer, C., & Jardón, A. (2018). A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb. Journal of Healthcare Engineering, 2018, 1–19. https://doi.org/10.1155/2018/9758939
  15. Krebs, H. I., & Volpe, B. T. (2015). Robotics: A Rehabilitation Modality. Current Physical Medicine and Rehabilitation Reports, 3(4), 243–247. https://doi.org/10.1007/s40141-015-0101-6
  16. Leng, M., Liu, P., Zhang, P., Hu, M., Zhou, H., Li, G., … Chen, L. (2019). Pet robot intervention for people with dementia: A systematic review and meta-analysis of randomized controlled trials. Psychiatry Research, 271, 516–525. https://doi.org/10.1016/j.psychres.2018.12.032
  17. Piatt, J., Nagata, S., Šabanović, S., Cheng, W.-L., Bennett, C., Lee, H. R., & Hakken, D. (2017). Companionship with a robot? Therapists’ perspectives on socially assistive robots as therapeutic interventions in community mental health for older adults. American Journal of Recreation Therapy, 15(4), 29–39. https://doi.org/10.5055/ajrt.2016.0117
  18. Troccaz, J., Dagnino, G., & Yang, G.-Z. (2019). Frontiers of Medical Robotics: From Concept to Systems to Clinical Translation. Annual Review of Biomedical Engineering, 21(1). https://doi.org/10.1146/annurev-bioeng-060418-052502
  19. Riek, L. D. (2017). Healthcare Robotics. ArXiv:1704.03931 [Cs]. Retrieved from http://arxiv.org/abs/1704.03931
  20. Kappassov, Z., Corrales, J.-A., & Perdereau, V. (2015). Tactile sensing in dexterous robot hands — Review. Robotics and Autonomous Systems, 74, 195–220. https://doi.org/10.1016/j.robot.2015.07.015
  21. Choi, C., Schwarting, W., DelPreto, J., & Rus, D. (2018). Learning Object Grasping for Soft Robot Hands. IEEE Robotics and Automation Letters, 3(3), 2370–2377. https://doi.org/10.1109/LRA.2018.2810544
  22. Shortliffe, E., & Sepulveda, M. (2018). Clinical Decision Support in the Era of Artificial Intelligence. Journal of the American Medical Association.
  23. Attema, T., Mancini, E., Spini, G., Abspoel, M., de Gier, J., Fehr, S., … Sloot, P. M. A. (n.d.). A new approach to privacy-preserving clinical decision support systems. 15.
  24. Castaneda, C., Nalley, K., Mannion, C., Bhattacharyya, P., Blake, P., Pecora, A., … Suh, K. S. (2015). Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. Journal of Clinical Bioinformatics, 5(1). https://doi.org/10.1186/s13336-015-0019-3
  25. Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA Internal Medicine, 178(11), 1544. https://doi.org/10.1001/jamainternmed.2018.3763
  26. Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. ArXiv:1804.02969 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1804.02969
  27. Weng, S. F., Reps, J., Kai, J., Garibaldi, J. M., & Qureshi, N. (2017). Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLOS ONE, 12(4), e0174944. https://doi.org/10.1371/journal.pone.0174944
  28. Suresh, H., Hunt, N., Johnson, A., Celi, L. A., Szolovits, P., & Ghassemi, M. (2017). Clinical Intervention Prediction and Understanding using Deep Networks. ArXiv:1705.08498 [Cs]. Retrieved from http://arxiv.org/abs/1705.08498
  29. Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689
  30. Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA, 319(1), 19. https://doi.org/10.1001/jama.2017.19198

Comment: Why AI is a threat to democracy—and what we can do to stop it

The developmental track of AI is a problem, and every one of us has a stake. You, me, my dad, my next-door neighbor, the guy at the Starbucks that I’m walking past right now. So what should everyday people do? Be more aware of who’s using your data and how. Take a few minutes to read work written by smart people and spend a couple minutes to figure out what it is we’re really talking about. Before you sign your life away and start sharing photos of your children, do that in an informed manner. If you’re okay with what it implies and what it could mean later on, fine, but at least have that knowledge first.

Hao, K. (2019). Why AI is a threat to democracy—and what we can do to stop it. MIT Technology Review.

I agree that we all have a stake in the outcomes of the introduction of AI-based systems, which means that we all have a responsibility in helping to shape it. While most of us can’t be involved in writing code for these systems, we can all be more intentional about what data we provide to companies working on artificial intelligence and how they use that data (on a related note, have you ever wondered just how much data is being collected by Google, for example?). Here are some of the choices I’ve made about the software that I use most frequently:

  • Mobile operating system: I run LineageOS on my phone and tablet, which is based on Android but is modified so that the data on the phone stays on the phone, i.e. it is not reported back to Google.
  • Desktop/laptop operating system: I’ve used various Ubuntu Linux distributions since 2004, not only because Linux really is a better OS (faster, cheaper, more secure, etc.) but because open-source software is more trustworthy.
  • Browser: I switched from Chrome to Firefox with the release of Quantum, which saw Firefox catch up in performance metrics. With privacy as the default design consideration, it was an easy move to make. You should just switch to Firefox.
  • Email: I’ve looked around – a lot – and can’t find an email provider to replace Gmail. I use various front-ends to manage my email on different devices but that doesn’t get me away from the fact that Google still processes all of my emails on the back-end. I could pay for my email service provider – and there do seem to be good options – but then I’d be paying for email.
  • Search engine: I moved from Google Search to DuckDuckGo about a year ago and can’t say that I miss Google Search all that much. Every now and again I do find that I have to go to Google, especially for images.
  • Photo storage: Again, I’ve looked around for alternatives but the combination of the free service, convenience (automatic upload of photos taken on my phone), unlimited storage (for lower res copies) and the image recognition features built into Google Photos make this very difficult to move away from.
  • To do list: I’ve used Todoist and Any.do on and off for years but eventually moved to Todo.txt because I wanted to have more control over the things that I use on a daily basis. I like the fact that my work is stored in a text file and will be backwards compatible forever.
  • Note taking: I use a combination of Simplenote and Qownnotes for my notes. Simplenote is the equivalent of sticky notes (short-term notes that I make on my phone and delete after acting on them), and Qownnotes is for long-form note-taking and writing that stores notes as text files. Again, I want to control my data and these apps give me that control along with all of the features that I care about.
  • Maps: Google Maps is without equal and is so far ahead of anyone else that it’s very difficult to move away from. However, I’ve also used Here We Go on and off and it’s not bad for simple directions.

From the list above you can see that I pay attention to how my data is stored, shared and used, and that privacy is important to me. I’m not unsophisticated in my use of technology, and I still can’t get away from Google for email, photos and maps – arguably the most important data-gathering services that the company provides. Maybe there’s something I’m missing, but companies like Google, Facebook, Amazon and Microsoft are so entangled in everything that we care about that I really don’t see a way to avoid using their products. The suggestion that users should be more careful about what data they share, and who they share it with, is a useful thought experiment, but the practical reality is that it would be very difficult indeed to avoid these companies altogether.

Google isn’t the only problem. See what Facebook knows about you.

Comment: Facebook says it’s going to make it harder to access anti-vax misinformation

Facebook won’t go as far as banning pages that spread anti-vaccine messages…[but] would make them harder to find. It will do this by reducing their ranking and not including them as recommendations or predictions in search.

Firth, N. (2019). Facebook says it’s going to make it harder to access anti-vax misinformation. MIT Technology Review.

Of course this is a good thing, right? Facebook – already one of the most important ways that people get their information – is going to make it more difficult for readers to find information that opposes vaccination. With the recent outbreak of measles in the United States we need to do more to ensure that searches for “vaccination” don’t also surface results encouraging parents not to vaccinate their children.

But what happens when Facebook (or Google, or Microsoft, or Amazon) starts making broader decisions about what information is credible, accurate or fake? That would actually be great if we could trust their algorithms. But trust requires that we’re allowed to see the algorithm (and also that we can understand it, which, in most cases, we can’t). In this case, it’s a public health issue and most reasonable people would see that the decision is the “right” one. But when companies tweak their algorithms to privilege certain types of information over other types of information, then I think we need to be concerned. Today we agree with Facebook’s decision, but how confident can we be that we’ll still agree tomorrow?

Also, vaccines are awesome.

Comment: Separating the Art of Medicine from Artificial Intelligence

…the only really useful value of artificial intelligence in chest radiography is, at best, to provide triage support — tell us what is normal and what is not, and highlight where it could possibly be abnormal. Just don’t try and claim that AI can definitively tell us what the abnormality is, because it can’t do so any more accurately than we can because the data is dirty because we made it thus.

This is a generally good article on the challenges of using poorly annotated medical data to train machine learning algorithms. However, there are three points that I think are relevant, which the author doesn’t address at all:

  1. He assumes that algorithms will only be trained using chest images that have been annotated by human beings. They won’t. In fact, I can’t see why anyone would do this anyway for exactly the reasons he states. What is more likely is that AI will look across a wide range of clinical data points and use the other points in association with the CXR to determine a diagnosis. So, if the (actual) diagnosis is a cardiac issue you’d expect the image to correlate with cardiac markers and assign less weight to infection markers. Likewise, if the diagnosis was pneumonia, you’d see changes in infection markers but wouldn’t have much weighting assigned to cardiac information. In other words, the analysis of CXRs won’t be informed by human-annotated reports; it’ll happen through correlation with all the other clinical information gathered from the patient.
  2. He starts out by presenting a really detailed argument explaining the incredibly low inter-rater reliability, inaccuracy and weak validity of human judges (in this case, radiologists) when it comes to analysing chest X-rays, but then ends by saying that we should leave the interpretation to them anyway, rather than algorithms.
  3. He is a radiologist, which should at least make one pause when considering the final recommendation is to leave things to the radiologists.

These points aside, the author makes an excellent case for why we need to make sure that medical data are clean and annotated with machine-readable tags. Well worth a read.
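To make point 1 concrete, here is a toy sketch of the idea: a diagnosis weighted by how the image finding correlates with other clinical markers rather than with a human-annotated report. The function, markers and weights are entirely hypothetical, chosen only to illustrate the shape of the argument.

```python
def weighted_diagnosis(cxr_abnormality, cardiac_marker_elevated, infection_marker_elevated):
    """Toy example: correlate a chest X-ray abnormality score (0-1) with
    independent clinical markers instead of relying on an annotated report.
    Weights are illustrative, not from any real model."""
    cardiac = 0.5 * cxr_abnormality + (0.5 if cardiac_marker_elevated else 0.0)
    pneumonia = 0.5 * cxr_abnormality + (0.5 if infection_marker_elevated else 0.0)
    if cardiac > pneumonia:
        return "cardiac"
    if pneumonia > cardiac:
        return "pneumonia"
    return "indeterminate"
```

An abnormal CXR plus elevated troponin-style markers points one way; the same CXR plus elevated infection markers points the other. The label comes from the correlation across data points, with no radiologist’s annotation in the loop.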

Algorithmic de-skilling of clinical decision-makers

What will we do when we don’t drive most of the time but have a car that hands control to us during an extreme event?

Agrawal, A., Gans, J. & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence.

Before I get to the take-home message, I need to set this up a bit. The way that machine intelligence currently works is that you train an algorithm to recognise patterns in large data sets, often with the help of people who annotate the data in advance. This is known as supervised learning. Sometimes the algorithm is given no annotated answers at all; instead it takes actions and receives feedback on how close its outputs came to some criterion, gradually learning to do better. This is known as reinforcement learning.
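As a minimal sketch of the supervised case (the function and data are hypothetical; real systems learn far richer patterns than a single threshold), the algorithm’s “pattern” is derived directly from human-annotated examples:

```python
def fit_threshold(annotated_examples):
    """Learn a decision threshold from human-annotated (value, label) pairs,
    a toy stand-in for supervised pattern recognition."""
    positives = [x for x, label in annotated_examples if label == 1]
    negatives = [x for x, label in annotated_examples if label == 0]
    # Place the decision boundary midway between the two annotated classes.
    return (min(positives) + max(negatives)) / 2

def predict(threshold, value):
    return 1 if value >= threshold else 0
```

The point of the sketch is the dependency: without the annotations, there is nothing for this kind of learner to fit.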

In both cases, the algorithm isn’t trained in the wild but is rather developed within a constrained environment that simulates something of interest in the real world. For example, an algorithm may be trained to deal with uncertainty by playing StarCraft, which mimics the imperfect information state of real-world decision-making. This kind of probabilistic thinking defines many professional decision-making contexts, where we have to make a choice but may only be 70% confident that we’re making the right choice.

Eventually, you need to take the algorithm out of the simulated training environment and run it in the real world because this is the only way to find out if it will do what you want it to. In the context of self-driving cars, this represents a high-stakes tradeoff between the benefits of early implementation (more real-world data gathering, more accurate predictions, better autonomous driving capability), and the risks of making the wrong decision (people might die).

Even in a scenario where the algorithm has been trained to very high levels in simulation and then introduced at precisely the right time so as to maximise the learning potential while also minimising risk, it will still hardly ever have been exposed to rare events. We will be in the situation where cars will have autonomy in almost all driving contexts, except those where there is a real risk of someone being hurt or killed. At that moment, because of the limitations of its training, it will hand control of the vehicle back to the driver. And there is the problem. How long will it take for drivers to lose the skills that are necessary for them to make the right choice in that rare event?

Which brings me to my point. Will we see the same loss of skills in the clinical context? Over time, algorithms will take over more and more of our clinical decision-making, in much the same way that they’ll take over the responsibilities of a driver. And in almost all situations they’ll make more accurate predictions than a person. However, in some rare cases, the confidence level of the prediction will drop enough to lead to control being handed back to the clinician. Unfortunately, at this point, the clinician likely hasn’t been involved in clinical decision-making for an extended period and so, just when human judgement is determined to be most important, it may also be at its most limited.

How will clinicians maintain their clinical decision-making skills at the levels required to take over in rare events, when they are no longer involved in the day-to-day decision-making that hones that same skill?
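The handover can be sketched as a simple confidence threshold (purely illustrative; no real system’s interface is implied):

```python
def clinical_decision(prediction, confidence, threshold=0.95):
    """Let the algorithm act when it is confident enough; hand control
    back to the clinician for the rare, low-confidence cases."""
    if confidence >= threshold:
        return ("algorithm", prediction)
    return ("clinician", None)  # the rare event lands on the human
```

The uncomfortable property of this arrangement is that the clinician only ever sees the cases below the threshold, i.e. precisely the hardest ones, while getting no practice on the routine cases that used to keep their judgement sharp.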


18 March 2019 Update: The Digital Doctor: Will surgeons lose their skills in the age of automation? AI Med.

First compute no harm

Is it acceptable for algorithms today, or an AGI in a decade’s time, to suggest withdrawal of aggressive care and so hasten death? Or alternatively, should it recommend persistence with futile care? The notion of “doing no harm” is stretched further when an AI must choose between patient and societal benefit. We thus need to develop broad principles to govern the design, creation, and use of AI in healthcare. These principles should encompass the three domains of technology, its users, and the way in which both interact in the (socio-technical) health system.

Source: Enrico Coiera et al. (2017). First compute no harm. BMJ Opinion.

The article goes on to list some of the guiding principles for the development of AI in healthcare, including the following:

  • AI must be designed and built to meet safety standards that ensure it is fit for purpose and operates as intended.
  • AI must be designed for the needs of those who will work with it, and fit their workflows.
  • Humans must have the right to challenge an AI’s decision if they believe it to be in error.
  • Humans should not direct AIs to perform beyond the bounds of their design or delegated authority.
  • Humans should recognize that their own performance is altered when working with AI.
  • If humans are responsible for an outcome, they should be obliged to remain vigilant, even after they have delegated tasks to an AI.

The principles listed above are only a very short summary. If you’re interested in the topic of ethical decision making in clinical practice, you should read the whole thing.

MIT researchers show how to detect and address AI bias without loss in accuracy

The key…is often to get more data from underrepresented groups. For example…an AI model was twice as likely to label women as low-income and men as high-income. By increasing the representation of women in the dataset by a factor of 10, the number of inaccurate results was reduced by 40 percent.

Source: MIT researchers show how to detect and address AI bias without loss in accuracy | VentureBeat

What many people don’t understand about algorithmic bias is that, relative to the challenge of correcting bias in human beings, it can be corrected fairly easily. If machine learning outputs are biased, we can change the algorithm and we can change the datasets. What’s the plan for changing human bias?
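The correction described in the quote – increasing the representation of an underrepresented group – amounts to rebalancing the training data. A crude sketch (the record format and factor are illustrative; real pipelines use more careful resampling or reweighting):

```python
def oversample(records, is_underrepresented, factor=10):
    """Return the dataset with the underrepresented group's records
    duplicated so that the group appears `factor` times as often.
    A toy version of 'increasing representation by a factor of 10'."""
    minority = [r for r in records if is_underrepresented(r)]
    return records + minority * (factor - 1)
```

For example, boosting the records where sex is recorded as “F” by a factor of 10 turns one such record into ten, which is the kind of intervention the researchers describe – a change to the dataset, not to the people who produced it.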

The AI Threat to Democracy

With the advent of strong reinforcement learning…, goal-oriented strategic AI is now very much a reality. The difference is one of categories, not increments. While a supervised learning system relies upon the metrics fed to it by humans to come up with meaningful predictions and lacks all capacity for goal-oriented strategic thinking, reinforcement learning systems possess an open-ended utility function and can strategize continuously on how to fulfil it.

Source: Krumins, A. (2018). The AI Threat to Democracy.

“…an open-ended utility function” means that the algorithm is given a goal state and then left to its own devices to figure out how best to optimise towards that goal. It does this by trying a solution and seeing whether it got closer to the goal. Every step that moves the algorithm closer to the goal state is rewarded (typically by a token that the algorithm is conditioned to value). In other words, an RL algorithm takes actions to maximise reward. Consequently, it represents a fundamentally different approach to problem-solving from supervised learning, which requires human intervention to tell the algorithm whether or not its conclusions are valid.
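A reward-maximising loop can be sketched in a few lines (a toy, not any real RL library’s API): the algorithm is given only a reward signal, never labelled answers, and settles on whichever action earns the most.

```python
import random

def learn_by_reward(actions, reward, steps=200, lr=0.1):
    """Estimate a value for each action purely from reward feedback,
    then exploit the best one -- no human-labelled answers involved."""
    values = {a: 0.0 for a in actions}
    for _ in range(steps):
        a = random.choice(actions)                  # explore an action
        values[a] += lr * (reward(a) - values[a])   # move estimate toward observed reward
    return max(values, key=values.get)
```

Note what is absent: nothing in the loop encodes how the goal should be reached or which actions are off-limits. That gap between “maximise the reward” and “do it in a way we’d endorse” is exactly where the concerns below come from.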

In the video below, a DeepMind researcher uses AlphaGo and AlphaGo Zero to illustrate the difference between supervised and reinforcement learning.

This is both exciting and a bit unsettling. Exciting because it means that an AI-based system could iteratively solve problems that we don’t yet know how to solve ourselves. This has implications for the really big, complex challenges we face, like climate change. On the other hand, we should probably start thinking very carefully about the goal states that we ask RL algorithms to optimise towards, especially since we’re not specifying up front what path the system should take to reach the goal, and we have no idea whether the algorithm will take human values into consideration when making choices about achieving it. We may be at a point where the paperclip maximiser is no longer just a weird thought experiment.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Bostrom, N. (2003). Ethical Issues in Advanced Artificial Intelligence.

We may end up choosing goal states without specifying in advance what paths the algorithm should not take because they would be unaligned with human values. Like the problem that Mickey faces in the Sorcerer’s Apprentice, the unintended consequences of our choices with reinforcement learning may be truly significant.