Comment: In competition, people get discouraged by competent robots

After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.

Lefkowitz, M. (2019). In competition, people get discouraged by competent robots. Cornell Chronicle.

This is worth noting since it seems increasingly likely that we’ll soon be working not only with more competent robots but also with more competent software. There are already concerns about how clinicians will respond to the recommendations of clinical decision-support systems, especially when those systems make suggestions that are at odds with the clinician’s intuition.

Paradoxically, the effect may be even worse with expert clinicians, who may not always be able to explain their decision-making. Novices, who use more analytical frameworks (or even basic algorithms like “IF this, THEN that”), may find it easier to modify their decisions because their reasoning is more “visible” (System 2). Experts, who rely more on subconscious pattern recognition (System 1), may be less able to identify where in their reasoning process they fell victim to biases like confirmation or availability bias, and so may be less likely to modify their decisions.

It seems clear that we need to start thinking about how we’re going to prepare current and future clinicians for the arrival of intelligent agents in the clinical context. If we start disregarding the recommendations of clinical decision-support systems, not because they produce errors in judgement but because we simply don’t like them, then there’s a strong case to be made that it is the human that we cannot trust.


Contrast this with automation bias, which is the tendency to give more credence to decisions made by machines because of a misplaced notion that algorithms are simply more trustworthy than people.

Comment: Artificial intelligence turns brain activity into speech

People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation.

Servick, K. (2019). Artificial intelligence turns brain activity into speech. Science.

To be clear, this research doesn’t describe the artificial recreation of imagined speech i.e. the internal speech that each of us hears as part of the personal monologue of our own subjective experiences. Rather, it maps the electrical activity in the areas of the brain that are responsible for the articulation of speech as the participant reads or listens to sounds being played back to them. Nonetheless, it’s an important step for patients who have suffered damage to those areas of the brain responsible for speaking.

I also couldn’t help but get excited about the following: when electrical signals from the brain are converted into digital information (as they would have to be here, in order to do the analysis and speech synthesis), then why not also transmit that digital information over wifi? If it’s possible for me to understand you “thinking about saying words”, instead of you having to use your muscles of articulation to actually say them, how long will it be before you can send those words to me over a wireless connection?

Giving algorithms a sense of uncertainty could make them more ethical

The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs. Say the AI system was meant to help make medical decisions. Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost. “Have the system be explicitly unsure and hand the dilemma back to the humans.”

Hao, K. (2019). Giving algorithms a sense of uncertainty could make them more ethical. MIT Technology Review.

I think about clinical reasoning like this: it’s what we call the kind of probabilistic thinking where we take a bunch of – sometimes contradictory – data and try to make a decision in which we can have varying levels of confidence. For example, “If A, then probably D. But if A and B, then unlikely to be D. If C, then definitely not D”. Algorithms (and novice clinicians) are quite poor at this kind of reasoning, which is why they’ve traditionally not been used for clinical decision-making and ethical reasoning (and why novice clinicians tend not to handle clinical uncertainty very well). But if it turns out that machine learning algorithms are able to manage conditions of uncertainty and provide a range of options that humans can act on, given a wide variety of preferences and contexts, it may be that machines will be one step closer to doing our reasoning for us.
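To make the “menu of options” idea from the quote a little more concrete, here is a minimal, hypothetical sketch in Python. The option names, numbers and confidence threshold are all made up for illustration; the point is simply the pattern of handing the full set of trade-offs back to the human whenever the system’s own confidence is low.

```python
# Hypothetical sketch of "be explicitly unsure and hand the dilemma back
# to the humans": when the model is confident it recommends one option;
# otherwise it returns the whole menu of trade-offs to the clinician.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Option:
    name: str
    expected_lifespan_years: float   # higher is better
    suffering_score: float           # lower is better
    cost: float                      # lower is better
    confidence: float                # model's confidence in its own estimates

def recommend(options: List[Option], threshold: float = 0.8) -> Union[Option, List[Option]]:
    """Recommend a single option only when every estimate is confident;
    otherwise hand the full set of trade-offs back to the human."""
    if all(o.confidence >= threshold for o in options):
        return max(options, key=lambda o: o.expected_lifespan_years)  # one (crude) ranking rule
    return options  # explicit uncertainty: the human decides

menu = [
    Option("Maximise lifespan", 4.2, 6.0, 12000, 0.55),
    Option("Minimise suffering", 3.1, 2.5, 9000, 0.70),
    Option("Minimise cost", 2.8, 4.0, 3000, 0.90),
]

print(recommend(menu))  # low confidence on two options, so the whole menu comes back
```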

Comment: Separating the Art of Medicine from Artificial Intelligence

…the only really useful value of artificial intelligence in chest radiography is, at best, to provide triage support — tell us what is normal and what is not, and highlight where it could possibly be abnormal. Just don’t try and claim that AI can definitively tell us what the abnormality is, because it can’t do so any more accurately than we can because the data is dirty because we made it thus.

This is a generally good article on the challenges of using poorly annotated medical data to train machine learning algorithms. However, there are three relevant points that the author doesn’t address at all:

  1. He assumes that algorithms will only be trained using chest images that have been annotated by human beings. They won’t. In fact, I can’t see why anyone would do this anyway, for exactly the reasons he states. What is more likely is that AI will look across a wide range of clinical data points and use them in association with the CXR to determine a diagnosis. So, if the (actual) diagnosis is a cardiac issue, you’d expect the image to correlate with the cardiac markers, with less weight assigned to infection markers. Likewise, if the diagnosis was pneumonia, you’d see changes in infection markers but little weight assigned to cardiac information. In other words, the analysis of CXRs won’t be informed by human-annotated reports; it’ll happen through correlation with all the other clinical information gathered from the patient (see the sketch after this list).
  2. He starts out by presenting a really detailed argument explaining the incredibly low inter-rater reliability, inaccuracy and weak validity of human judges (in this case, radiologists) when it comes to analysing chest X-rays, but then ends by saying that we should leave the interpretation to them anyway, rather than algorithms.
  3. He is a radiologist, which should at least give one pause, given that his final recommendation is to leave things to the radiologists.
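As a toy illustration of point 1, here is a hypothetical sketch (entirely synthetic data, with feature names I’ve invented and scikit-learn as my choice of tool) of how a diagnosis could be learned from an image-derived feature combined with other clinical markers, rather than from radiologists’ annotations of the image itself.

```python
# Hypothetical sketch: learn a diagnosis from an image-derived feature plus
# other clinical markers, using outcomes rather than human-annotated
# radiology reports as the training signal. All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
feature_names = ["cxr_opacity", "troponin", "bnp", "crp", "white_cell_count"]

X = rng.normal(size=(n, 5))
# Synthetic "ground truth": pneumonia (1) is driven mainly by infection
# markers, cardiac failure (0) by cardiac markers; the CXR feature
# contributes to both, which is exactly why it is ambiguous on its own.
signal = 0.5 * X[:, 0] - 1.2 * X[:, 1] - 0.8 * X[:, 2] + 1.5 * X[:, 3] + 1.0 * X[:, 4]
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The learned coefficients show where the weight sits: infection markers
# push towards pneumonia, cardiac markers push away from it.
print(dict(zip(feature_names, model.coef_[0].round(2))))
```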

These points aside, the author makes an excellent case for why we need to make sure that medical data are clean and annotated with machine-readable tags. Well worth a read.

Algorithmic de-skilling of clinical decision-makers

What will we do when we don’t drive most of the time but have a car that hands control to us during an extreme event?

Agrawal, A., Gans, J. & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence.

Before I get to the take-home message, I need to set this up a bit. The way that machine intelligence currently works is that you train an algorithm to recognise patterns in large data sets, often with the help of people who annotate the data in advance. This is known as supervised learning. Sometimes the algorithm isn’t given annotated data at all (i.e. no supervision); instead, its outputs are judged against some criterion, or reward, and determined to be more or less successful. This is known as reinforcement learning.
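A toy contrast between the two training signals (my own illustration, not from the book): supervised learning fits to annotated examples, while reinforcement learning only ever sees a scalar reward for the actions it tries. All the example values below are made up.

```python
# Toy illustration of the two training signals described above.

import random

# Supervised: each example arrives with a human-provided annotation.
labelled_examples = [
    ((5.2, 1.1), "pneumonia"),
    ((0.3, 7.8), "cardiac failure"),
]
print("supervised example:", labelled_examples[0])

# Reinforcement: no annotations; the learner tries an action and is told
# only how well it did, judged against some criterion.
def reward(action: str) -> float:
    return 1.0 if action == "brake" else -1.0

action = random.choice(["brake", "accelerate"])
print("reinforcement step:", action, "->", reward(action))
```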

In both cases, the algorithm isn’t trained in the wild but is rather developed within a constrained environment that simulates something of interest in the real world. For example, an algorithm may be trained to deal with uncertainty by playing Starcraft, which mimics the imperfect information state of real-world decision-making. This kind of probabilistic thinking defines many professional decision-making contexts where we have to make a choice but may only be 70% confident that we’re making the right choice.

Eventually, you need to take the algorithm out of the simulated training environment and run it in the real world because this is the only way to find out if it will do what you want it to. In the context of self-driving cars, this represents a high-stakes tradeoff between the benefits of early implementation (more real-world data gathering, more accurate predictions, better autonomous driving capability), and the risks of making the wrong decision (people might die).

Even in a scenario where the algorithm has been trained to very high levels in simulation and then introduced at precisely the right time so as to maximise the learning potential while also minimising risk, it will still hardly ever have been exposed to rare events. We will be in the situation where cars will have autonomy in almost all driving contexts, except those where there is a real risk of someone being hurt or killed. At that moment, because of the limitations of its training, it will hand control of the vehicle back to the driver. And there is the problem. How long will it take for drivers to lose the skills that are necessary for them to make the right choice in that rare event?

Which brings me to my point. Will we see the same loss of skills in the clinical context? Over time, algorithms will take over more and more of our clinical decision-making, in much the same way that they’ll take over the responsibilities of a driver. And in almost all situations they’ll make more accurate predictions than a person. However, in some rare cases, the confidence level of the prediction will drop low enough for control to be handed back to the clinician. Unfortunately, at that point, the clinician likely hasn’t been involved in clinical decision-making for an extended period and so, just when human judgement is determined to be most important, it may also be at its most limited.
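Here is a minimal, hypothetical sketch of that handover pattern: the system acts on its own prediction while confidence is high and defers to the human only for the rare, low-confidence cases, which are precisely the cases the human has had the least recent practice with. The threshold and actions are invented for illustration.

```python
# Hypothetical sketch of confidence-based handover: autonomous when the
# model is confident, human-in-the-loop for the rare uncertain cases.

from typing import Callable, Tuple

def decide(model_confidence: float,
           model_action: Callable[[], str],
           human_action: Callable[[], str],
           threshold: float = 0.95) -> Tuple[str, str]:
    """Return (who_decided, action)."""
    if model_confidence >= threshold:
        return "algorithm", model_action()
    # The rare branch: by the time control reaches the human, they may not
    # have made this kind of decision for months.
    return "human", human_action()

print(decide(0.99, lambda: "continue current plan", lambda: "clinician reviews"))
print(decide(0.60, lambda: "continue current plan", lambda: "clinician reviews"))
```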

How will clinicians maintain their clinical decision-making skills at the levels required to take over in rare events, when they are no longer involved in the day-to-day decision-making that hones that same skill?


18 March 2019 Update: The Digital Doctor: Will surgeons lose their skills in the age of automation? AI Med.

Questions for Artificial Intelligence in Health Care

Artificial intelligence (AI) is gaining high visibility in the realm of health care innovation. Broadly defined, AI is a field of computer science that aims to mimic human intelligence with computer systems. This mimicry is accomplished through iterative, complex pattern matching, generally at a speed and scale that exceed human capability. Proponents suggest, often enthusiastically, that AI will revolutionize health care for patients and populations. However, key questions must be answered to translate its promise into action.

Maddox, T.M., Rumsfeld, J.S. & Payne, P.R. (2018). Questions for Artificial Intelligence in Health Care. JAMA. Published online December 10, 2018. doi:10.1001/jama.2018.1893.

The questions and follow-up responses presented in the article are useful, highlighting the nuance that is often ignored in mainstream pieces that tend to focus on the extreme potential of the technology (i.e. what this might one day be like) rather than the more subtle implications that we need to consider today. The following text is verbatim from the article:

  1. What are the right tasks for AI in healthcare? AI is best used when the primary task is identifying clinically useful patterns in large, high-dimensional data sets.
  2. What are the right data for AI? AI is most likely to succeed when used with high-quality data sources on which to “learn” and classify data in relation to outcomes. However, most clinical data, whether from electronic health records (EHRs) or medical billing claims, remain ill-defined and largely insufficient for effective exploitation by AI techniques.
  3. What is the right evidence standard for AI? Innovations in medications and medical devices are required to undergo extensive evaluation, often including randomized clinical trials and postmarketing surveillance, to validate clinical effectiveness and safety. If AI is to directly influence and improve clinical care delivery, then an analogous evidence standard is needed to demonstrate improved outcomes and a lack of unintended consequences.
  4. What are the right approaches for integrating AI into clinical care? Even after the correct tasks, data, and evidence for AI are addressed, realization of its potential will not occur without effective integration into clinical care. To do so requires that clinicians develop a facility with interpreting and integrating AI-supported insights in their clinical care.

Split learning for health: Distributed deep learning without sharing raw patient data

Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of splitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks…

Source: [1812.00564] Split learning for health: Distributed deep learning without sharing raw patient data

The paper describes how the design and training of an algorithm can be shared across different organisations without any of them having access to the others’ raw data or model details.

This has important implications for the development of AI-based health applications, in that hospitals and other service providers need not share raw patient data with companies like Google/DeepMind. Health organisations could do the basic algorithm design in-house with their smaller, local data sets and then send the algorithm to organisations that have the massive data sets necessary for refining it, all without exposing the initial data, thereby protecting patient privacy.
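To make the idea a bit more concrete, here is a rough sketch of the split-learning pattern using PyTorch. This is my own toy illustration of the general approach described in the abstract, not the paper’s code: the hospital keeps the raw data and the first part of the network, and only the activations at the cut layer and their gradients cross the institutional boundary. In this simplified configuration the labels are shared with the partner, which is only one of the settings the paper discusses.

```python
# Toy sketch of split learning (not the paper's implementation): the raw
# patient data never leaves the hospital; only the activations at the cut
# layer and their gradients are exchanged with the collaborating partner.

import torch
import torch.nn as nn

hospital_net = nn.Sequential(nn.Linear(30, 16), nn.ReLU())                 # runs inside the hospital
partner_net = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))  # runs at the partner

opt_hospital = torch.optim.SGD(hospital_net.parameters(), lr=0.01)
opt_partner = torch.optim.SGD(partner_net.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(64, 30)                   # synthetic patient features (stay on-site)
y = torch.randint(0, 2, (64, 1)).float()  # labels (shared in this simplified configuration)

# Hospital-side forward pass; only `smashed` crosses the boundary.
smashed = hospital_net(x)
sent = smashed.detach().requires_grad_()

# Partner-side forward and backward pass on the activations alone.
loss = loss_fn(partner_net(sent), y)
opt_partner.zero_grad()
loss.backward()
opt_partner.step()

# The gradient at the cut layer is sent back; the hospital finishes
# backpropagation locally without ever exposing the raw data.
opt_hospital.zero_grad()
smashed.backward(sent.grad)
opt_hospital.step()
```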

E.J. Chichilnisky | Restoring Sight to the Blind

Source: After on podcast with Rob Reid: Episode 39: E.J. Chichilnisky | Restoring Sight to the Blind.

This was mind-blowing.

The conversation starts with a basic overview of how the eye works, which is fascinating in itself, but then they start talking about how they’ve figured out how to insert an external (digital) process into the interface between the eye and brain, and that’s when things get crazy.

It’s not always easy to see the implications of converting physical processes into software, but this is one of those conversations that makes them easy to grasp. When we use software to mediate the information that the brain receives, we’re able to manipulate that information in many different ways. For example, with this system in place, you could see wavelengths of light that are invisible to the unaided eye. Imagine being able to see in the infrared or ultraviolet spectrum. But it gets even crazier.

It turns out we have cells in the interface between the brain and eye that are capable of processing different kinds of visual information (for example, reading text and evaluating movement). When both types of cell receive information at the same time, we find it really hard to process both streams simultaneously. But if software could divert the different kinds of information directly to the cells responsible for processing each of them, we could do things like read text while driving. The brain wouldn’t be confused, because the information isn’t coming via the eyes at all and so the different streams are processed as two separate channels.

Like I said, mind-blowing stuff.

Additional reading

The fate of medicine in the time of AI

Source: Coiera, E. (2018). The fate of medicine in the time of AI.

The challenges of real-world implementation alone mean that we probably will see little change to clinical practice from AI in the next 5 years. We should certainly see changes in 10 years, and there is a real prospect of massive change in 20 years. [1]

This means that students entering health professions education today are likely to begin seeing the impact of AI in clinical practice when they graduate, and very likely to see significant changes 3-5 years into their practice after graduating. Regardless of what progress is made between now and then, the students we’re teaching today will certainly be practising in a clinical environment that is very different from the one we prepared them for.

Coiera offers the following suggestions for how clinical education should probably be adapted:

  • Include a solid foundation in the statistical and psychological science of clinical reasoning.
  • Develop models of shared decision-making that include patients’ intelligent agents as partners in the process.
  • Clinicians will have a greater role to play in patient safety as new risks emerge, e.g. automation bias.
  • Clinicians must be active participants in the development of new models of care that will become possible with AI.

We should also recognise that there is still a lot that is unknown about where, when and how these disruptions will occur. Coiera suggests that our best guesses about the changes that are likely to happen should focus on those aspects of practice that are routine, because this is where AI research will focus. As educators, we should work with clinicians to identify the areas of clinical practice that are most likely to be disrupted by AI-based technologies and then determine how education needs to change in response.

The prospect of AI is a Rorschach blot upon which many transfer their technological dreams or anxieties.

Finally, it’s also useful to consider that we will see in AI our own hopes and fears and that these biases are likely to inform the way we think about the potential benefits and dangers of AI. For this reason, we should include as diverse a group as possible in the discussion of how this technology should be integrated into practice.


[1] The quote from the article is based on Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Medical data: who owns it and what can be done to it?

…most states in the US do not have law to confer specific ownership of medical data to patients, while others put the rights on hospitals and physicians. Of all, only New Hampshire allows patients to legally own their medical records.

Source: Medical data: who owns it and what can be done to it?

A short article that raises some interesting questions. My understanding is that the data belongs to the patient and the media on which the data is stored belongs to the hospital. For example, I own the data generated about my body but the paper folder or computer hard drive belongs to the hospital. That means I can ask the hospital to photocopy my medical folder and give me the copy (or to email me an exported XML data file from whatever EHR system they use) but I can’t take the folder home when I’m discharged.

Things are going to get interesting when AI-based systems are being trained en masse using historical medical records where patients did not give consent for their data to be used for algorithmic training. I believe that the GDPR goes some way towards addressing this issue by stating that “healthcare providers do not have to seek prior permission from patients to use their data, as long as they observe the professional secrecy act to not identify patients at the individual level”.