Categories
AI clinical

Comment: Sensing behavior.

Wearable technology like smartwatches and the related digital devices that now populate our homes and workplaces are starting to change the face of medicine, as they produce data that help us diagnose health issues, and capabilities to help treat them. On this episode, we look at the rise of personal health informatics and computational approaches to behavioral science, with a special focus on caring for children with severe autism.

Cohen, D. & Goodwin, N. (2019). Sensing Behavior. What’s New podcast.

If I have someone wearing that biosensor and we have 3 minutes of their previous data, 8 out of 10 times that we would predict that they’re going to aggress in the next one minute, they do.

In this conversation Dan Cohen speaks to Matthew Goodwin about using wearable sensors to predict future episodes of aggressive behaviour in children with autism. The AI picks up physiological variations in the children that are invisible to human observers and uses those changes to make very accurate predictions about the likelihood of an aggressive incident occurring in the next minute. In other words, the sensor worn by the child is recording changes in physiology that no human caregiver could ever see, and then telling a caregiver, “In one minute the child is going to become aggressive.” For caregivers and parents, one minute is a significant amount of time to prepare for the incident, or to make efforts to de-escalate and buy more time.
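
To make the approach concrete, here is a minimal sketch of a sliding-window prediction pipeline: summarise the previous three minutes of biosensor data and predict whether an aggressive episode begins in the following minute. The feature summaries, sampling rate, and classifier below are illustrative assumptions on my part, not the researchers’ actual pipeline.

```python
# Sketch of the sliding-window idea: 3 minutes of history -> predict whether
# aggression starts in the next minute. The channels (e.g. heart rate, EDA,
# movement) and the classifier are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def make_windows(signals, labels, fs=1, history_s=180, horizon_s=60):
    """signals: (n_samples, n_channels) physiological data sampled at fs Hz.
    labels: (n_samples,) array with 1 where an aggressive episode is occurring."""
    X, y = [], []
    hist, step = history_s * fs, horizon_s * fs
    for start in range(0, len(signals) - hist - step, step):
        window = signals[start:start + hist]
        future = labels[start + hist:start + hist + step]
        # Simple per-channel summaries; real pipelines use far richer features.
        X.append(np.concatenate([window.mean(axis=0),
                                 window.std(axis=0),
                                 window.max(axis=0) - window.min(axis=0)]))
        y.append(int(future.any()))
    return np.array(X), np.array(y)

# Usage, given real signals and labels:
# X, y = make_windows(signals, labels)
# print(cross_val_score(RandomForestClassifier(), X, y, scoring="roc_auc"))
```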

And these are not so-called “black box” algorithms; the researchers can interrogate the data and, by eliminating different variables from the analysis, can make fairly strong claims about what physiological features are predictive of aggressive behaviour. Over time, as the sensors become more sophisticated, lighter, and cheaper, we’re going to see everyone wearing sensors of some kind that provide insights into our behaviour.
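
One simple way to do this kind of variable elimination is an ablation analysis: retrain the model with each group of physiological features removed and compare performance with the full model. The sketch below is a generic illustration with assumed feature groupings and scoring, not the method reported in the study.

```python
# Toy ablation: drop one feature group at a time, retrain, and measure how much
# predictive performance falls. The column groupings here are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

feature_groups = {"heart_rate": [0, 1, 2], "eda": [3, 4, 5], "movement": [6, 7, 8]}

def ablation(X, y):
    """X: (n_windows, n_features) numpy array; y: binary labels."""
    full = cross_val_score(RandomForestClassifier(), X, y, scoring="roc_auc").mean()
    drops = {}
    for name, cols in feature_groups.items():
        keep = [c for c in range(X.shape[1]) if c not in cols]
        score = cross_val_score(RandomForestClassifier(), X[:, keep], y,
                                scoring="roc_auc").mean()
        drops[name] = full - score  # a larger drop suggests a more predictive group
    return full, drops
```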

We all have periods of feeling stressed, angry or sad without really knowing why. While we may never know precisely why, it looks like we may get to a point where we can know something about how. Imagine getting feedback from a wearable saying that, based on a combination of heart rate, blood pressure, pupil dilation, etc., you’re likely to feel angry within the next 30 seconds and that it might be a good idea to step away from whatever you’re doing and take a few deep breaths. Imagine how that might influence your relationships with your spouse, children and co-workers.

Download the episode transcript.

Categories
AI clinical

Comment: Individuals have unique muscle activation signatures.

We used a machine learning approach to test the uniqueness and robustness of muscle activation patterns. Our results show that activation patterns not only vary between individuals, but are unique to each individual. Individual differences should, therefore, be considered relevant information for addressing fundamental questions about the control of movement.

Hug, F. et al. (2019). Individuals have unique muscle activation signatures as revealed during gait and pedaling. Journal of Applied Physiology.

Machine learning algorithms have been able to identify individuals from their gait patterns for a while. Now we have this study showing that ML can also identify individuals from their unique muscle activation patterns. For me, the main takeaway is that technology has a level of insight into our bodies that is just going to keep getting better.
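
In case it helps to see what the identification task looks like computationally, it boils down to supervised classification where each sample is a muscle activation profile and the label is the person it came from. The feature layout and classifier below are generic assumptions for illustration, not the authors’ exact analysis.

```python
# Sketch of identifying individuals from activation profiles. X might hold
# time-normalised EMG envelopes from several muscles, flattened into one vector
# per trial; y holds participant IDs. Details here are illustrative assumptions.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def identification_accuracy(X, y):
    """Returns cross-validated accuracy; chance level is 1 / number of people."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, y, cv=5).mean()
```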

As much as we may think that our observations, palpation, and special tests give us useful information to integrate into patient management, it’s not even close to the level of detail we can get from machines. I’m fairly convinced that we’ll soon start seeing studies exploring which aspects of physiotherapy assessment are more accurate when conducted by algorithms.

See also: What AI means for the physical exam.

Categories
AI clinical

Article: Resistance to Medical Artificial Intelligence

Across a variety of medical decisions ranging from prevention to diagnosis to treatment, we document a robust reluctance to use medical care delivered by AI providers rather than comparable human providers.

Whereas much is known about medical AI’s accuracy, cost-efficiency, and scalability, little is known about patients’ receptivity to medical AI. Yet patients are the ultimate consumers of medical AI, and will determine its adoption and implementation both directly and indirectly.

Longoni, C., Bonezzi, A., & Morewedge, C.K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.

This is a long paper reporting on nine studies that look at patient preferences when comparing health services that are either automated or provided by human beings. I think it’s an important article that covers a wide range of factors that need to be considered in the context of clinical AI. We’re spending a lot of money on research and development of AI-based interventions but we know almost nothing about how patients will engage with them.

Note: This is a nice idea for a study looking at patient preferences in rehabilitation contexts where we’re likely to see the introduction of robots, for example. I’d be interested to know if there are any differences across geography, culture, etc. Let me know if you’re keen to collaborate.

Categories
AI clinical

Human Compatible: Artificial Intelligence and the Problem of Control

Stuart Russell’s newest work, Human Compatible: Artificial Intelligence and the Problem of Control, is a cornerstone piece, alongside Superintelligence and Life 3.0, that articulates the civilization-scale problem we face of aligning machine intelligence with human goals and values. Not only is this a further articulation and development of the AI alignment problem, but Stuart also proposes a novel solution which bring us to a better understanding of what it will take to create beneficial machine intelligence.

Perry, L. (2019). AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell. Future of Life Institute.

This is an episode from the Future of Life Institute’s podcast series on AI alignment: an interview with Stuart Russell about his new book, Human Compatible: Artificial Intelligence and the Problem of Control.

The control problem is about ensuring that AI is aligned with human values, which is difficult when we can’t really define what these are.

It’s really hard to specify in advance what we mean when we say “human values” because the answer is likely to differ depending on which humans we ask. This is a significant problem for health systems, where clinical AI will increasingly make decisions that affect patient outcomes and where ethical judgement influences the choices being made at many points in the system. For example:

  • Micro: What is the likely prognosis for this patient? Do we keep them in the expensive ICU considering that the likelihood of survival is 37%, or do we move them onto the ward? Or send them home for palliative care? These all have cost implications that are weighted differently depending on the confidence we have in the predicted prognosis.
  • Macro: How are national health budgets developed? Do we invest more in infrastructure that is high impact (saves lives, usually in younger patients) but which touches relatively few people, or in services (like physiotherapy) that help many more patients improve quality of life but who may be unlikely to contribute to the state’s revenue base?

An example of tool AI is a system that aims to predict who is likely to be readmitted to hospital following discharge. It provides answers but can’t take action.

In the context of tool AI it’s relatively simple to specify what the utility function should be. In other words, we can be quite confident that we can simply tell the system what the goal is and then reward it when it achieves that goal. As Russell says, “this works when machines are stupid.” If the AI gets the goal wrong it’s not a big deal, because we can reset it and try to figure out where the mistake happened. Over time we can keep iterating until the goal achieved by the system starts to approximate the goal we care about.

But at some point we’re going to move towards clinical AI that makes a decision and then acts on it, which is where we need to have a lot more trust that the system is making the “right choice”. In this context, “right” means a choice that’s aligned with human values. For example, we may decide that in certain contexts the cost of an intervention shouldn’t be considered (because it’s the outcome we care about and not the expense), whereas in other contexts we really do want to say that certain interventions are too expensive relative to the expected outcomes.

See here for The Guardian book review of Human Compatible.

Since we can’t specify up front what the “correct” decision in certain kinds of ethical scenarios should be (because the answer is almost always, “it depends”) we need to make sure that clinical AI really is aligned with what we care about. But, if we can’t use formal rules to determine how AI should integrate human values into its decision-making then how do we move towards a point where we can trust the decisions – and actions – taken by machines?

Russell suggests that, rather than begin with the premise that the AI has perfect knowledge of the world and of our preferences, we could begin with an AI that knows something about our contextual preferences but doesn’t fully understand them. In this framing the AI only has imperfect or partial knowledge of the objective, which means it can never be certain whether it has achieved it. This may lead to situations where the AI must first check in with a human being, because it never knows what the full objective is or whether it has been achieved.
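
A toy illustration of that idea (not Russell’s actual formalism, and with made-up numbers): the agent holds a belief over candidate objectives, and it only acts on its own when the expected value of asking the human to clarify the objective is low.

```python
# Toy sketch: an agent that is uncertain about the objective defers to a human
# when knowing the true objective would change its decision by a lot. Actions,
# objectives, and utilities are invented purely for illustration.
import numpy as np

# Hypothetical utilities of each action under two candidate objectives,
# e.g. index 0 = "minimise cost", index 1 = "maximise chance of survival".
utilities = {
    "discharge_home": np.array([0.9, -0.6]),
    "move_to_ward":   np.array([0.4, 0.3]),
    "keep_in_icu":    np.array([-0.5, 0.8]),
}

def choose(belief, ask_cost=0.2):
    """belief: the agent's probabilities over the candidate objectives."""
    expected = {a: float(belief @ u) for a, u in utilities.items()}
    best = max(expected, key=expected.get)
    # Expected gain from learning the true objective before acting.
    best_per_objective = np.max(np.stack(list(utilities.values())), axis=0)
    value_of_asking = float(belief @ best_per_objective) - expected[best]
    return "ask_human" if value_of_asking > ask_cost else best

print(choose(np.array([0.5, 0.5])))    # uncertain about values -> "ask_human"
print(choose(np.array([0.05, 0.95])))  # confident enough -> "keep_in_icu"
```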

Instead of building AI that is convinced of the correctness of its knowledge and actions, Russell suggests that we build doubt into our AI-based systems. Considering the high value of doubt in good decision-making, this is probably a good idea.

Categories
AI clinical

Researchers develop an AI system with near-perfect seizure prediction.

…a pair of researchers have created…an AI system that can predict epileptic seizures with 99.6-percent accuracy. Even better, it can do so up to an hour before they occur…giving people enough time to prepare for the attack by taking medication.

Hardawar, D. (2019). Researchers develop an AI system with near-perfect seizure prediction. Engadget.

The next step will be when the medication is delivered automatically based on the prediction.

Categories
AI clinical

AI outperforms clinicians in triaging post-operative patients for ICU.

Artificial intelligence correctly triaged 41 of the 50 patients in the study (82%). Surgeons had an accuracy triage rate of 70% (35 patients), intensivists 64% (32 patients), and anaesthesiologists 58% (29 patients). The number of incorrect triage decisions was lowest for AI (18%), followed by 30% for surgeons, 36% for intensivists, and 42% for anaesthesiologists.

Editor’s pick. (2019). AI outperforms clinicians in triaging post-operative patients for ICU. Medical Brief.

These are the kinds of contexts where we’ll increasingly see the use of machine learning algorithms to “provide guidance” to clinicians: high stakes decision-making scenarios where the correct outcome relies on the integration of data from a wide variety of clinical domains that are not optimised for human cognition. It’s just not possible for a human being – or team of human beings – to track the high number of relevant and inter-related variables that influence these kinds of clinical outcomes.

The resulting algorithm included 87 clinical variables and 15 specific criteria related to admission to the ICU within 48 hours of surgery.

Categories
AI clinical

Podcast: What AI means for the physical exam

It’s a very important ritual. If you look at rituals, in general, they are all about crossing a threshold. We marry, we have baptisms, we have funerals—all with ceremony to indicate the crossing of a threshold. If we step back and look at the physical exam, it has all the trappings of ritual.

Verghese, A. (2019). Eric Topol and Abraham Verghese on What AI Means for the Physical Exam. Medicine and the Machine podcast.

A few thoughts after listening to an episode of the Medicine and the Machine podcast.

Almost immediately we get to the notion that very little valuable data collection happens during the physical exam. It’s clear that the validity and reliability of a lot of what we do during the “laying on of hands” is questionable. So far so good. But then the hosts start talking about the value of physical touch as part of a ritual that includes some kind of threshold crossing for the clinician and patient. This is where it starts getting a bit weird.

On the one hand, I agree that there’s a lot of ritual framing the patient-clinician interaction and that this may even be something patients look for. On the other hand, I don’t think this is something to be celebrated, and I believe it will fall away as AI becomes more tightly integrated into healthcare. You don’t need to conduct a physical exam to signal to the patient that you’re paying attention; you can just pay attention.

Note to self: I think that there’s some potentially fruitful discussion around the links between religion and medicine that might be worth exploring at some point.

I’m also uncomfortable with some of the language used in the episode that’s reminiscent of priests, ceremony, and the mystical; I don’t know why, but it makes me think of a profession in decline. There’s a parallel with religion, which is under pressure worldwide as the spaces in which God has room to move get smaller and smaller. Not that medicine is going to go away entirely, but the parts of it that try to hold onto the remnants of a past that’s no longer relevant are going to become increasingly disconnected from 21st century clinical practice.

If you think that the value of the human being in the patient-clinician encounter is that we need people to enact a ritual, then surely you’ve lost the plot. There are many reasons for why this perspective is problematic but two big ones come to mind:

  1. Rituals are used to create a sense of mystery as part of a ceremony related to threshold crossing. While I think that this has value in some parts of society (e.g. becoming an adult, getting married, etc.) I don’t think it has a place in scientific endeavour.
  2. You don’t need to spend 7 years studying medicine, and then another 5 years specialising, in order to simulate some kind of threshold crossing with a patient.

Having said all that, I think the episode is still worth listening to, even if only to hear Topol and Verghese come up with dubious arguments for why it’s so important for the doctor to remain central to the clinical encounter.

Categories
AI clinical

Podcast series: Medicine and the machine.

A relatively new podcast series, hosted by Medscape, in which Eric Topol and Abraham Verghese discuss the implications of artificial intelligence for medicine.

Categories
AI clinical research

Survey: Physiotherapy clinicians’ perceptions of artificial intelligence in clinical practice

We know very little about how physiotherapy clinicians think about the impact of AI-based systems on clinical practice, or how these systems will influence human relationships and professional practice. As a result, we cannot prepare for the changes that are coming to clinical practice and physiotherapy education. The aim of this study is to explore how physiotherapists currently think about the potential impact of artificial intelligence on their own clinical practice.

Earlier this year I registered a project that aims to develop a better understanding of how physiotherapists think about the impact of artificial intelligence in clinical practice. Now I’m ready to move forward with the first phase of the study, which is an online survey of physiotherapy clinicians’ perceptions of AI in professional practice. The second phase will be a series of follow-up interviews with survey participants who’d like to discuss the topic in more depth.

I’d like to get as many participants as possible (obviously) so would really appreciate it if you could share the link to the survey with anyone you think might be interested. There are 12 open-ended questions split into 3 sections, with a fourth section for demographic information. Participants don’t need a detailed understanding of artificial intelligence and (I think) I’ve provided enough context to make the questionnaire simple for anyone to complete in about 20 minutes.

Here is a link to the questionnaire: https://forms.gle/HWwX4v7vXyFgMSVLA.

This project has received ethics clearance from the University of the Western Cape (project number: BM/19/3/3).

Categories
AI clinical

Comment: How do we learn to work with intelligent machines?

I discussed something related to this earlier this year (the algorithmic de-skilling of clinicians) and thought that this short presentation added something extra. It’s not just that AI and machine learning have the potential to create scenarios in which qualified clinical experts become de-skilled over time; they will also impact on our ability to teach and learn those skills in the first place.

We’re used to the idea of a novice working closely with a more experienced clinician, and learning from them through observation and questioning (how closely this maps onto reality is a different story). When the tasks usually performed by more experienced clinicians are outsourced to algorithms, who does the novice learn from?

Will clinical supervision consist of talking undergraduate students through the algorithmic decision-making process? Discussing how probabilistic outputs were determined from limited datasets? How to interpret the confidence levels of clinical decision-support systems? When clinical decisions are made by AI-based systems in the real world of clinical practice, what will we lose in the undergraduate clinical programme, and how do we plan on addressing it?