Comment: Artificial intelligence turns brain activity into speech

People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation.

Servick, K. (2019). Artificial intelligence turns brain activity into speech. Science.

To be clear, this research doesn’t describe the artificial recreation of imagined speech, i.e. the internal speech that each of us hears as part of the personal monologue of our own subjective experience. Rather, it maps the electrical activity in the areas of the brain responsible for the articulation of speech as the participant reads aloud or listens to sounds being played back to them. Nonetheless, it’s an important step for patients who have suffered damage to the areas of the brain responsible for speaking.

I also couldn’t help but get excited about the following: once electrical signals from the brain are converted into digital information (as they would have to be here, in order to do the analysis and speech synthesis), why not also transmit that digital information over Wi-Fi? If it’s possible for me to understand you “thinking about saying words”, instead of you using your muscles of articulation to actually say them, how long will it be before you can send those words to me over a wireless connection?

365 project – Beach in Gordon’s Bay

I’ve been taking a photo a day, every day since 01 January and will keep doing this for the rest of the year. I’ve decided that every now and again when I have a picture that I like, I’ll post it here. Like today, for example. We’re at a two-day writing retreat in Gordon’s Bay and I took this picture when I arrived at the venue this morning.

You can see the previous photos in my 365 project here.

Giving algorithms a sense of uncertainty could make them more ethical

The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs. Say the AI system was meant to help make medical decisions. Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost. “Have the system be explicitly unsure and hand the dilemma back to the humans.”

Hao, K. (2019). Giving algorithms a sense of uncertainty could make them more ethical. MIT Technology Review.

This is how I think about clinical reasoning: it’s the kind of probabilistic thinking where we take a set of (sometimes contradictory) data points and try to make a decision with varying levels of confidence. For example: “If A, then probably D. But if A and B, then unlikely to be D. If C, then definitely not D.” Algorithms (and novice clinicians) are quite poor at this kind of reasoning, which is why algorithms have traditionally not been used for clinical decision-making and ethical reasoning (and why novice clinicians tend not to handle clinical uncertainty very well). But if it turns out that machine learning algorithms can manage conditions of uncertainty and provide a range of options that humans can act on, given a wide variety of preferences and contexts, machines will be one step closer to doing our reasoning for us.
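As a toy illustration of what “handing the dilemma back to the humans” might look like in code, here is a minimal Python sketch. Everything in it — the findings, the rules and the probabilities — is invented for illustration; a real system would learn these from data rather than have them hard-coded.

```python
# A toy illustration of probabilistic clinical reasoning: instead of
# returning a single answer, the system returns every plausible option
# with an explicit confidence, handing the final decision to a human.
# All findings, rules and probabilities here are invented.

def assess(findings: set[str]) -> list[tuple[str, float]]:
    """Return candidate conclusions about diagnosis D, ranked by confidence."""
    if "C" in findings:
        return [("not D", 0.99)]               # "If C, then definitely not D"
    if {"A", "B"} <= findings:
        return [("not D", 0.80), ("D", 0.20)]  # "If A and B, then unlikely D"
    if "A" in findings:
        return [("D", 0.70), ("not D", 0.30)]  # "If A, then probably D"
    return [("D", 0.50), ("not D", 0.50)]      # no evidence either way

for case in [{"A"}, {"A", "B"}, {"C"}]:
    print(case, "->", assess(case))
```

The point of the design is in the return type: the system never collapses the uncertainty into a single recommendation, so the trade-offs remain visible to the human making the final call.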

Comment: Separating the Art of Medicine from Artificial Intelligence

…the only really useful value of artificial intelligence in chest radiography is, at best, to provide triage support — tell us what is normal and what is not, and highlight where it could possibly be abnormal. Just don’t try and claim that AI can definitively tell us what the abnormality is, because it can’t do so any more accurately than we can because the data is dirty because we made it thus.

This is a generally good article on the challenges of using poorly annotated medical data to train machine learning algorithms. However, there are three relevant points that the author doesn’t address at all:

  1. He assumes that algorithms will only be trained using chest images that have been annotated by human beings. They won’t. In fact, I can’t see why anyone would do this, for exactly the reasons he states. What is more likely is that AI will look across a wide range of clinical data points and use them in association with the CXR to determine a diagnosis. So, if the (actual) diagnosis is a cardiac issue, you’d expect the image to correlate with cardiac markers and assign less weight to infection markers. Likewise, if the diagnosis was pneumonia, you’d see changes in infection markers but little weight assigned to cardiac information. In other words, the analysis of CXRs won’t be informed by human-annotated reports; it’ll happen through correlation with all the other clinical information gathered from the patient (see the sketch after this list).
  2. He starts out by presenting a really detailed argument explaining the incredibly low inter-rater reliability, inaccuracy and weak validity of human judges (in this case, radiologists) when it comes to analysing chest X-rays, but then ends by saying that we should leave the interpretation to them anyway, rather than algorithms.
  3. He is a radiologist, which should at least give one pause when his final recommendation is to leave things to the radiologists.
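To make point 1 concrete, here is a deliberately simplified Python sketch of the idea: an image-derived signal is weighed against other clinical markers rather than against a radiologist’s annotation. The feature names, weights and cut-offs are all hypothetical; a real system would learn them from the full clinical record.

```python
import math

# A toy sketch of correlating an image signal with other clinical data:
# infection markers (CRP) push the pneumonia estimate up, while cardiac
# markers (troponin) push it down, suggesting a cardiac explanation for
# the same radiographic appearance. All weights and values are invented.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def p_pneumonia(cxr_opacity_score: float, crp: float, troponin: float) -> float:
    """Hypothetical probability of pneumonia from an image score and two lab markers."""
    z = 2.0 * cxr_opacity_score + 1.5 * (crp / 100.0) - 1.5 * (troponin / 0.1) - 1.0
    return sigmoid(z)

print(p_pneumonia(cxr_opacity_score=0.8, crp=150.0, troponin=0.01))  # infective picture: high
print(p_pneumonia(cxr_opacity_score=0.8, crp=5.0, troponin=0.50))    # cardiac picture: low
```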

These points aside, the author makes an excellent case for why we need to make sure that medical data are clean and annotated with machine-readable tags. Well worth a read.

The first In Beta “Experiments in Physiotherapy Education” unconference

The In Beta project may seem to have been quiet for the last few months but the fact is we’ve been busy organising a two-day In Beta unconference that will take place on 14-15 May 2019 at HESAV in Lausanne, Switzerland. If you’re planning on going to the WCPT conference (10-13 May) and have an interest in physiotherapy education, you may want to look into the option of joining us for another two days of discussion and engagement, albeit in a more relaxed, less academic format.

Attendance is free, although you will need to make your own arrangements for travel and accommodation. For more information check out the unconference website and register here.

We are incredibly grateful to Haute Ecole de Santé Vaud for hosting the unconference and providing venues for us over the two days.

Algorithmic de-skilling of clinical decision-makers

What will we do when we don’t drive most of the time but have a car that hands control to us during an extreme event?

Agrawal, A., Gans, J. & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence.

Before I get to the take-home message, I need to set this up a bit. The way that machine intelligence currently works is that you train an algorithm to recognise patterns in large data sets, often with the help of people who annotate the data in advance. This is known as supervised learning. Sometimes the algorithm isn’t shown the correct answers at all; instead, it acts and receives feedback — a reward or a penalty — depending on how good its outputs turn out to be, and it adjusts its behaviour to maximise that reward. This is known as reinforcement learning.
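To make the distinction concrete, here are two toy learners in Python — a minimal sketch rather than realistic training code. The first fits a known answer from labelled examples (supervised); the second discovers which of two options pays off purely from reward feedback (reinforcement). All of the numbers are invented for illustration.

```python
import random

# Supervised learning: the learner is shown labelled examples (x, y)
# and adjusts its parameter to reduce error against the known answers.
def train_supervised(examples, lr=0.05, epochs=200):
    """Fit y ≈ w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            w -= lr * (w * x - y) * x  # nudge w towards the labelled answer
    return w

# Reinforcement learning (a two-armed bandit): the learner is never told
# the right answer; it only sees a reward after each choice and keeps a
# running estimate of each option's value.
def train_bandit(reward_probs, steps=2000, eps=0.1):
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)
    for _ in range(steps):
        if random.random() < eps:                      # explore occasionally
            arm = random.randrange(len(reward_probs))
        else:                                          # otherwise exploit
            arm = max(range(len(reward_probs)), key=lambda i: values[i])
        reward = 1.0 if random.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

print(train_supervised([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]))  # ≈ 2.0
print(train_bandit([0.2, 0.8]))  # value estimates ≈ [0.2, 0.8]
```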

In both cases, the algorithm isn’t trained in the wild but is rather developed within a constrained environment that simulates something of interest in the real world. For example, an algorithm may be trained to deal with uncertainty by playing StarCraft, which mimics the imperfect-information state of real-world decision-making. This kind of probabilistic thinking defines many professional decision-making contexts, where we have to make a choice but may only be 70% confident that we’re making the right choice.

Eventually, you need to take the algorithm out of the simulated training environment and run it in the real world because this is the only way to find out if it will do what you want it to. In the context of self-driving cars, this represents a high-stakes tradeoff between the benefits of early implementation (more real-world data gathering, more accurate predictions, better autonomous driving capability), and the risks of making the wrong decision (people might die).

Even in a scenario where the algorithm has been trained to very high levels in simulation and then introduced at precisely the right time so as to maximise the learning potential while also minimising risk, it will still hardly ever have been exposed to rare events. We will be in the situation where cars will have autonomy in almost all driving contexts, except those where there is a real risk of someone being hurt or killed. At that moment, because of the limitations of its training, it will hand control of the vehicle back to the driver. And there is the problem. How long will it take for drivers to lose the skills that are necessary for them to make the right choice in that rare event?

Which brings me to my point: will we see the same loss of skills in the clinical context? Over time, algorithms will take over more and more of our clinical decision-making, in much the same way that they’ll take over the responsibilities of a driver. And in almost all situations they’ll make more accurate predictions than a person. However, in some rare cases, the confidence level of the prediction will drop low enough for control to be handed back to the clinician. Unfortunately, at this point the clinician probably hasn’t been involved in clinical decision-making for an extended period and so, just when human judgement is determined to be most important, it may also be at its most limited.
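As a minimal sketch of this hand-back pattern (sometimes called selective prediction, or prediction with a rejection option): the algorithm acts while its confidence is high and defers to the clinician when it drops. The model, threshold and cases below are stand-ins invented for illustration.

```python
# A toy version of the hand-back mechanism described above: the algorithm
# acts autonomously while its confidence is high and defers to the human
# when confidence drops. Real systems would calibrate both the model's
# confidence estimates and the threshold carefully.

CONFIDENCE_THRESHOLD = 0.90

def predict_with_confidence(case: dict) -> tuple[str, float]:
    """Stand-in for a trained model; returns (prediction, confidence)."""
    # A rare or unusual case yields low confidence.
    return ("treat", 0.55) if case.get("rare") else ("treat", 0.97)

def decide(case: dict) -> str:
    prediction, confidence = predict_with_confidence(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"algorithm acts: {prediction} (confidence {confidence:.2f})"
    # This is the moment the text worries about: the human takes over
    # precisely when the case is hardest.
    return f"deferred to clinician (confidence {confidence:.2f})"

print(decide({"rare": False}))
print(decide({"rare": True}))
```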

How will clinicians maintain their clinical decision-making skills at the levels required to take over in rare events, when they are no longer involved in the day-to-day decision-making that hones that same skill?

Link: Enlightenment Wars: Some Reflections on ‘Enlightenment Now,’ One Year Later

I’m a big fan of Steven Pinker’s writing (I know that this isn’t fashionable with the social justice warriors, but there it is) and so was really happy to read his 10 000-word response to some of the criticisms of his latest book, Enlightenment Now. While reviews of the book were overwhelmingly positive, many bloggers and online commentators took a real dislike to Pinker’s arguments, sometimes seemingly because of who else liked the book (e.g. Bill Gates). Where Pinker uses data and links to sources to support his claims, his critics generally go for straw-man arguments and ad hominem attacks.

Pinker’s response is a long read but it’s also a really good example of how to respond to a critique of your academic work. He doesn’t take it personally and simply does what he is good at, which is marshalling the available evidence to support his arguments. If you like Steven Pinker (and science and rationality in general) you may enjoy this post.

Here is the link: https://quillette.com/2019/01/14/enlightenment-wars-some-reflections-on-enlightenment-now-one-year-later/.

Who is planning for the future of physiotherapy?

In the Middle Ages, cities could spend more than 100 years building a cathedral while at the same time believing that the apocalypse was imminent. They must’ve had a remarkable conviction that commissioning these projects would guarantee them eternal salvation. Compare this to the way we think about planning and design today where, for example, we don’t think more than three years into the future simply because that would fall outside the current organisational or electoral cycle. Sometimes it feels like the bulk of the work that a politician does today is to secure the funding that will get them re-elected tomorrow. Where do we see real-world examples of long-term planning that can help guide our decision-making in the present?

A few days ago I spent some time preparing feedback on a draft of the HPCSA minimum requirements for physiotherapy training in South Africa, and one of the things that struck me was how much of it was just more of the same. This document is going to inform physiotherapy education and practice for at least the next decade, yet there was no mention of advances at the cutting edge of medical science or the massive impact that emerging technologies are going to have on clinical practice. Genetic engineering, nanotechnology, artificial intelligence and robotics are starting to drive significant changes in healthcare and it seems that, as a profession, we’re largely oblivious to what’s coming. It’s dawned on me that we have no real plan for the future of physiotherapy (the closest I’ve seen is Dave Nicholls’ new book, ironically titled The End of Physiotherapy).

What would a good plan look like? In the interests of time, I’m just going to take the high-level suggestions from this article on how the US could improve its planning for AI development and make a short comment on each (I’ve expanded on some of these ideas in my OpenPhysio article on the same topic).

  • Invest more: Fund research into practice innovations that take into account the social, economic, ethical and clinical implications of emerging technologies. Breakthroughs in how we can best utilise emerging technologies as core aspects of physiotherapy practice will come through funded research programmes in universities, especially in the early stages of innovation. We need to take the long-term view that, even if robotics, for example, isn’t having a big impact on physiotherapy today, one day we’ll see things like percussion and massage simply go away. We will also need to fund research on what aspects of the care we provide are really valued by patients (and what they, and funders, will pay for).
  • Prepare for job losses: From the article: “While [emerging technologies] can drive economic growth, it may also accelerate the eradication of some occupations, transform the nature of work in other jobs, and exacerbate economic inequality.” For example, self-driving cars are going to massively drive down the injuries that occur as a result of motor vehicle accidents (MVAs). Orthopaedic-related physiotherapy work is therefore going to dry up as the patient pool gets smaller. Preventative, personalised medicine will likewise result in dramatic reductions in the incidence of chronic lifestyle conditions. The “education” component of practice will be outsourced to apps. Even if physiotherapy jobs are not entirely lost, they will certainly be transformed unless we start thinking about how our practice can evolve.
  • Nurture talent: We will need to ensure that we retain and recapture interest in the profession. I’m not sure about other countries but in South Africa, we have a relatively high attrition rate in physiotherapy after a few years of clinical work. The employment prospects and long-term career options, especially in the public health system, are quite poor and many talented physiotherapists leave because they’re bored or frustrated. I recently saw a post on LinkedIn where one of our most promising graduates from 5 years ago is now a property developer. After 4 years of intense study and commitment, and 3 years of clinical practice, he just decided that physiotherapy isn’t where he sees his long-term future. He and many others who have left health care practice represent a deep loss for the profession.
  • Prioritise education: At the undergraduate level we should re-evaluate the curriculum and ensure that it is fit for purpose in the 21st century. How much of our current programmes are concerned with the impact of robotics, nanotechnology, genetic engineering and artificial intelligence? We will need to create space for in-depth development within physiotherapy but also ensure development across disciplines (the so-called T-shaped graduate). Continuing professional development will become increasingly important as more aspects of professional work change and, over time, are eradicated. Those who cannot (or will not) continue learning are unlikely to have meaningful long-term careers.
  • Guide regulation: At the moment, progress in emerging technologies is being driven by startups that are funded with venture capital and whose primary goal is rapid growth to fuel increasing valuations. This ecosystem doesn’t encourage entrepreneurs to limit risks and instead pushes them to “move fast and break things”, which isn’t exactly aligned with the medical imperative to “first do no harm”. Health professionals will need to ensure that technologies introduced into clinical practice are first and foremost serving the interests of patients, rather than driving up the value of medical technology startups. If we are not actively involved in regulating these technologies, we are likely to find our practice subject to them.
  • Understand the technology: In order to engage with any of the previous items in the list, we will first need to understand the technologies involved. For example, if you don’t know how the methods of data gathering and analysis can lead to biased algorithmic decision-making, will you be able to argue for why your patient’s health insurance funder shouldn’t make decisions about what interventions you need to provide? We need to ensure that we are not only specialists in clinical practice, but also specialists in how technology will influence clinical practice.

Each of the items in the list above is only very briefly covered here, and each could be the foundation for PhD-level programmes of research. If you’re interested in the future of the profession (and by that I mean you’re someone who wonders what health professional practice will look like in 100 years), I’d love to hear your thoughts. Do you know of anyone who has started building our cathedrals?
