Link: How AI Will Rewire Us

Radical innovations have previously transformed the way humans live together. The advent of cities…meant a less nomadic existence and a higher population density. More recently, the invention of technologies including the printing press, the telephone, and the internet revolutionized how we store and communicate information.

As consequential as these innovations were, however, they did not change the fundamental aspects of human behaviour: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching.

But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are—not just in our direct interactions with the machines in question, but in our interactions with one another.

Christakis, N. (2019). How AI Will Rewire Us. The Atlantic.

The author describes a series of experiments showing how, depending on the nature of the AI they interact with, human beings can be made to respond differently to their teammates and collaborators. For example, having a bot make minor errors and then apologise can nudge people towards being more compassionate with each other. This should give us pause as we consider how we want to design the systems that we’ll soon be working with.

For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship.


See also: Comment: In competition, people get discouraged by competent robots.

10 recommendations for the ethical use of AI

In February the New York Times hosted the New Work Summit, a conference that explored the opportunities and risks associated with the emergence of artificial intelligence across all aspects of society. Attendees worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence, the results of which are listed below.

  1. Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
  2. Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
  3. Privacy: Users should be able to easily opt out of data collection.
  4. Diversity: A.I. technology should be developed by inherently diverse teams.
  5. Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
  6. Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.
  7. Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
  8. Collective governance: Companies should work together to self-regulate the industry.
  9. Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
  10. “Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

The list of recommendations seems reasonable enough on the surface, although I wonder how practical they are given the business models of the companies most active in developing AI-based systems. As long as Google, Microsoft, Facebook, etc. are generating the bulk of their revenue from advertising that’s powered by the data we give them, they have little incentive to be transparent, to disclose, or to welcome regulation. The recommendations also pull against each other: if we opt our data out of the AI training pool, the resulting algorithms are trained on less diverse data and so become more susceptible to bias and less useful and accurate, because more data is usually better for algorithm development. And relying on internal processes to build trust? That seems odd.

However, even though it’s easy to find issues with all of these recommendations, that doesn’t mean they’re not useful. The more of these kinds of conversations we have, the more likely it is that we’ll figure out how to build AI that positively influences society.

Comment: In competition, people get discouraged by competent robots

After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.

Lefkowitz, M. (2019). In competition, people get discouraged by competent robots. Cornell Chronicle.

This is worth noting since it seems increasingly likely that we’ll soon be working not only with more competent robots but also with more competent software. There are already concerns about how clinicians will respond to the recommendations of clinical decision-support systems, especially when those systems make suggestions that are at odds with the clinician’s intuition.

Paradoxically, the effect may be even worse with expert clinicians, who may not always be able to explain their decision-making. Novices, who use more analytical frameworks (or even basic rules like IF this, THEN that), may find it easier to modify their decisions because their reasoning is more “visible” (System 2). Experts, who rely more on subconscious pattern recognition (System 1), may be less able to identify where in their reasoning they fell victim to cognitive biases like confirmation or availability bias, and so may be less likely to modify their decisions.

It seems really clear that we need to start thinking about how we’re going to prepare current and future clinicians for the arrival of intelligent agents in the clinical context. If we start disregarding the recommendations of clinical decision support systems, not because they produce errors in judgement but because we simply don’t like them, then there’s a strong case to be made that it is the human that we cannot trust.


Contrast this with automation bias, which is the tendency to give more credence to decisions made by machines because of a misplaced notion that algorithms are simply more trustworthy than people.

Comment: Why AI is a threat to democracy—and what we can do to stop it

The developmental track of AI is a problem, and every one of us has a stake. You, me, my dad, my next-door neighbor, the guy at the Starbucks that I’m walking past right now. So what should everyday people do? Be more aware of who’s using your data and how. Take a few minutes to read work written by smart people and spend a couple minutes to figure out what it is we’re really talking about. Before you sign your life away and start sharing photos of your children, do that in an informed manner. If you’re okay with what it implies and what it could mean later on, fine, but at least have that knowledge first.

Hao, K. (2019). Why AI is a threat to democracy—and what we can do to stop it. MIT Technology Review.

I agree that we all have a stake in the outcomes of the introduction of AI-based systems, which means that we all share a responsibility for helping to shape them. While most of us can’t be involved in writing code for these systems, we can all be more intentional about what data we provide to companies working on artificial intelligence and how we allow them to use that data (on a related note, have you ever wondered just how much data is being collected by Google, for example?). Here are some of the choices I’ve made about the software that I use most frequently:

  • Mobile operating system: I run LineageOS on my phone and tablet, which is based on Android but modified so that the data on the phone stays on the phone, i.e. it is not reported back to Google.
  • Desktop/laptop operating system: I’ve used various Ubuntu Linux distributions since 2004, not only because Linux really is a better OS (faster, cheaper, more secure, etc.) but because open-source software is more trustworthy.
  • Browser: I switched from Chrome to Firefox with the release of Quantum, which saw Firefox catch up in performance metrics. With privacy as the default design consideration, it was an easy move to make. You should just switch to Firefox.
  • Email: I’ve looked around – a lot – and can’t find an email provider to replace Gmail. I use various front-ends to manage my email on different devices but that doesn’t get me away from the fact that Google still processes all of my emails on the back-end. I could pay for my email service provider – and there do seem to be good options – but then I’d be paying for email.
  • Search engine: I moved from Google Search to DuckDuckGo about a year ago and can’t say that I miss Google Search all that much. Every now and again I do find that I have to go to Google, especially for images.
  • Photo storage: Again, I’ve looked around for alternatives but the combination of the free service, convenience (automatic upload of photos taken on my phone), unlimited storage (for lower res copies) and the image recognition features built into Google Photos make this very difficult to move away from.
  • To do list: I’ve used Todoist and Any.do on and off for years but eventually moved to Todo.txt because I wanted to have more control over the things that I use on a daily basis. I like the fact that my work is stored in a text file and will be backwards compatible forever.
  • Note taking: I use a combination of Simplenote and Qownnotes for my notes. Simplenote is the equivalent of sticky notes (short-term notes that I make on my phone and delete after acting on them), and Qownnotes is for long-form note-taking and writing that stores notes as text files. Again, I want to control my data and these apps give me that control along with all of the features that I care about.
  • Maps: Google Maps is without equal and is so far ahead of anyone else that it’s very difficult to move away from. However, I’ve also used Here We Go on and off and it’s not bad for simple directions.

From the list above you can see that I pay attention to how my data is stored, shared and used, and that privacy is important to me. I’m not unsophisticated in my use of technology, and I still can’t get away from Google for email, photos and maps, arguably the most important data-gathering services that the company provides. Maybe there’s something I’m missing, but companies like Google, Facebook, Amazon and Microsoft are so entangled in everything that we care about that I really don’t see a way to avoid using their products. The suggestion that users should be more careful about what data they share, and who they share it with, is a useful thought experiment, but in practice it would be very difficult indeed to avoid these companies altogether.

Google isn’t the only problem. See what Facebook knows about you.

Comment: Facebook says it’s going to make it harder to access anti-vax misinformation

Facebook won’t go as far as banning pages that spread anti-vaccine messages…[but] would make them harder to find. It will do this by reducing their ranking and not including them as recommendations or predictions in search.

Firth, N. (2019). Facebook says it’s going to make it harder to access anti-vax misinformation. MIT Technology Review.

Of course this is a good thing, right? Facebook – already one of the most important ways that people get their information – is going to make it more difficult for readers to find information that opposes vaccination. With the recent outbreak of measles in the United States we need to do more to ensure that searches for “vaccination” don’t also surface results encouraging parents not to vaccinate their children.

But what happens when Facebook (or Google, or Microsoft, or Amazon) starts making broader decisions about what information is credible, accurate or fake? That would actually be great if we could trust their algorithms, but trust requires that we’re allowed to see the algorithm (and also that we can understand it, which, in most cases, we can’t). In this case it’s a public health issue, and most reasonable people would agree that the decision is the “right” one. But when companies tweak their algorithms to privilege certain types of information over others, we need to be concerned. Today we agree with Facebook’s decision, but how confident can we be that we’ll still agree tomorrow?

Also, vaccines are awesome.

Comment: Artificial intelligence turns brain activity into speech

People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation.

Servick, K. (2019). Artificial intelligence turns brain activity into speech. Science.

To be clear, this research doesn’t describe the artificial recreation of imagined speech, i.e. the internal speech that each of us hears as part of the personal monologue of our own subjective experience. Rather, it maps the electrical activity in the areas of the brain that are responsible for the articulation of speech as the participant reads or listens to sounds being played back to them. Nonetheless, it’s an important step for patients who have suffered damage to those areas of the brain responsible for speaking.

I also couldn’t help but get excited about the following: if electrical signals from the brain are converted into digital information (as they would have to be here, in order to do the analysis and speech synthesis), why not also transmit that digital information over wifi? If it’s possible for me to understand you “thinking about saying words”, instead of you using your muscles of articulation to actually say them, how long will it be before you can send those words to me over a wireless connection?

Giving algorithms a sense of uncertainty could make them more ethical

The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs. Say the AI system was meant to help make medical decisions. Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost. “Have the system be explicitly unsure and hand the dilemma back to the humans.”

Hao, K. (2019). Giving algorithms a sense of uncertainty could make them more ethical. MIT Technology Review.

I think about clinical reasoning like this: it’s the kind of probabilistic thinking where we take a collection of – sometimes contradictory – data points and try to make a decision in which we can have varying levels of confidence. For example, “If A, then probably D. But if A and B, then unlikely to be D. If C, then definitely not D.” Algorithms (and novice clinicians) are quite poor at this kind of reasoning, which is why algorithms have traditionally not been used for clinical decision-making and ethical reasoning (and why novice clinicians tend not to handle clinical uncertainty very well). But if it turns out that machine learning algorithms are able to manage conditions of uncertainty and provide a range of options that humans can act on, given a wide variety of preferences and contexts, then machines will be one step closer to doing our reasoning for us.
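
As a thought experiment, here is a minimal sketch of what “handing the dilemma back to the humans” might look like. Everything in it is hypothetical – the option names, numbers and confidence threshold are invented for illustration – but it shows the basic idea of returning a menu of trade-offs rather than a single answer when the system is unsure.

```python
from dataclasses import dataclass

@dataclass
class Option:
    treatment: str          # hypothetical treatment label
    life_expectancy: float  # expected additional years
    suffering: float        # 0 (none) to 1 (severe)
    cost: float             # relative cost
    confidence: float       # the model's confidence in its own estimates

def recommend(options: list[Option], threshold: float = 0.8) -> list[Option]:
    """Return one recommendation only when the model is confident enough;
    otherwise hand the full menu of trade-offs back to the human."""
    confident = [o for o in options if o.confidence >= threshold]
    if len(confident) == 1:
        return confident
    return sorted(options, key=lambda o: -o.confidence)

menu = recommend([
    Option("maximise lifespan", 4.2, 0.7, 3.0, 0.60),
    Option("minimise suffering", 2.1, 0.1, 1.5, 0.70),
    Option("minimise cost", 1.8, 0.4, 0.5, 0.65),
])
for o in menu:
    print(f"{o.treatment}: +{o.life_expectancy} years, suffering {o.suffering}, cost {o.cost}")
```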

Comment: Separating the Art of Medicine from Artificial Intelligence

…the only really useful value of artificial intelligence in chest radiography is, at best, to provide triage support — tell us what is normal and what is not, and highlight where it could possibly be abnormal. Just don’t try and claim that AI can definitively tell us what the abnormality is, because it can’t do so any more accurately than we can because the data is dirty because we made it thus.

This is a generally good article on the challenges of using poorly annotated medical data to train machine learning algorithms. However, there are three relevant points that the author doesn’t address at all:

  1. He assumes that algorithms will only ever be trained on chest images that have been annotated by human beings. They won’t be. In fact, I can’t see why anyone would do this, for exactly the reasons he states. What is more likely is that an algorithm will look across a wide range of clinical data points and use those other points, in association with the CXR, to determine a diagnosis (see the sketch after this list). So, if the actual diagnosis is a cardiac problem, you’d expect the image findings to correlate with cardiac markers and carry little weight from infection markers. Likewise, if the diagnosis was pneumonia, you’d see changes in infection markers but little weight assigned to cardiac information. In other words, the analysis of CXRs won’t be informed by human-annotated reports; it’ll happen through correlation with all of the other clinical information gathered from the patient.
  2. He starts out by presenting a really detailed argument explaining the incredibly low inter-rater reliability, inaccuracy and weak validity of human judges (in this case, radiologists) when it comes to analysing chest X-rays, but then ends by saying that we should leave the interpretation to them anyway, rather than algorithms.
  3. He is a radiologist, which should at least give one pause when the final recommendation is to leave things to the radiologists.
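
To make the first point more concrete, here is a toy sketch of multi-modal weighting. It is not taken from the article and the numbers are invented; it simply illustrates how a diagnosis could be scored from image-derived features plus other clinical markers, without ever depending on a human-written radiology report.

```python
import numpy as np

# Hypothetical feature values for one patient
cxr_features = np.array([0.8, 0.1])  # [cardiomegaly score, consolidation score] from an image model
lab_markers = np.array([0.9, 0.2])   # [BNP (cardiac), CRP (infection)], normalised to 0-1

# Invented weights linking the combined features to two candidate diagnoses
weights = {
    "cardiac failure": np.array([0.7, 0.0, 0.6, 0.0]),  # leans on cardiomegaly + BNP
    "pneumonia":       np.array([0.0, 0.7, 0.0, 0.6]),  # leans on consolidation + CRP
}

x = np.concatenate([cxr_features, lab_markers])
scores = {dx: float(w @ x) for dx, w in weights.items()}
print(max(scores, key=scores.get), scores)  # the cardiac picture wins for this patient
```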

These points aside, the author makes an excellent case for why we need to make sure that medical data are clean and annotated with machine-readable tags. Well worth a read.

Algorithmic de-skilling of clinical decision-makers

What will we do when we don’t drive most of the time but have a car that hands control to us during an extreme event?

Agrawal, A., Gans, J. & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence.

Before I get to the take-home message, I need to set this up a bit. The way that machine intelligence currently works is that you train an algorithm to recognise patterns in large data sets, often with the help of people who annotate the data in advance. This is known as supervised learning. Sometimes the algorithm isn’t given annotated examples at all; instead it takes actions and its outputs are scored by a reward signal, so that behaviour which earns more reward is gradually reinforced. This is known as reinforcement learning.
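
Here is a minimal sketch of the difference between the two training signals, using toy examples rather than any particular library: the first loop fits a rule to human-annotated labels, while the second learns purely from a reward that says how well an action worked.

```python
import random
random.seed(1)

# Supervised learning: adjust a parameter to match human-annotated labels.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, annotated label); the hidden rule is y = 2x
w = 0.0
for _ in range(100):
    for x, y in data:
        error = (w * x) - y
        w -= 0.01 * error * x  # nudge the parameter towards the labelled answer
print(f"supervised estimate of the rule: y = {w:.2f} * x")

# Reinforcement learning: no labels; try actions and reinforce whatever earns more reward.
value = {"left": 0.0, "right": 0.0}  # estimated value of each action
def reward(action):                  # the environment secretly prefers "right"
    return 1.0 if action == "right" else 0.0

for _ in range(200):
    explore = random.random() < 0.1
    action = random.choice(list(value)) if explore else max(value, key=value.get)
    value[action] += 0.1 * (reward(action) - value[action])  # move estimate towards observed reward
print(f"learned action values: {value}")
```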

In both cases, the algorithm isn’t trained in the wild but is rather developed within a constrained environment that simulates something of interest in the real world. For example, an algorithm may be trained to deal with uncertainty by playing Starcraft, which mimics the imperfect information state of real-world decision-making. This kind of probabilistic thinking defines many professional decision-making contexts where we have to make a choice but may only be 70% confident that we’re making the right choice.

Eventually, you need to take the algorithm out of the simulated training environment and run it in the real world because this is the only way to find out if it will do what you want it to. In the context of self-driving cars, this represents a high-stakes tradeoff between the benefits of early implementation (more real-world data gathering, more accurate predictions, better autonomous driving capability), and the risks of making the wrong decision (people might die).

Even in a scenario where the algorithm has been trained to very high levels in simulation and then introduced at precisely the right time so as to maximise the learning potential while also minimising risk, it will still hardly ever have been exposed to rare events. We will be in the situation where cars will have autonomy in almost all driving contexts, except those where there is a real risk of someone being hurt or killed. At that moment, because of the limitations of its training, it will hand control of the vehicle back to the driver. And there is the problem. How long will it take for drivers to lose the skills that are necessary for them to make the right choice in that rare event?

Which brings me to my point: will we see the same loss of skills in the clinical context? Over time, algorithms will take over more and more of our clinical decision-making, in much the same way that they’ll take over the responsibilities of a driver, and in almost all situations they’ll make more accurate predictions than a person. However, in some rare cases the confidence level of the prediction will drop enough for control to be handed back to the clinician. Unfortunately, by that point the clinician likely won’t have been involved in clinical decision-making for an extended period and so, just when human judgement is deemed to be most important, it may also be at its most limited.
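
The handover itself is easy to imagine in code. This is a deliberately simplified sketch – the thresholds, labels and confidence values are all invented – but it shows the structure of the problem: the model quietly handles the routine cases, and the clinician is only called in for exactly the cases they have had the least recent practice with.

```python
def triage(prediction: str, confidence: float, threshold: float = 0.9) -> str:
    """Decide who acts on a case: the model for routine cases, the clinician for uncertain ones."""
    if confidence >= threshold:
        return f"model acts: {prediction}"
    return "handed back to the clinician (low confidence, likely a rare presentation)"

cases = [
    ("community-acquired pneumonia", 0.97),
    ("heart failure", 0.94),
    ("unclear multi-system picture", 0.41),  # the rare event the model was barely trained on
]

for diagnosis, confidence in cases:
    print(triage(diagnosis, confidence))
```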

How will clinicians maintain their clinical decision-making skills at the levels required to take over in rare events, when they are no longer involved in the day-to-day decision-making that hones that same skill?


18 March 2019 Update: The Digital Doctor: Will surgeons lose their skills in the age of automation? AI Med.

Who is planning for the future of physiotherapy?

In the Middle Ages, cities could spend more than 100 years building a cathedral while at the same time believing that the apocalypse was imminent. They must have had a remarkable conviction that commissioning these projects would guarantee them eternal salvation. Compare this to the way we think about planning and design today where, for example, we rarely look more than three years into the future simply because that would fall outside the current organisational or electoral cycle. Sometimes it feels like the bulk of the work that a politician does today is to secure the funding that will get them re-elected tomorrow. Where do we see real-world examples of long-term planning that can guide our decision-making in the present?

A few days ago I spent some time preparing feedback on a draft of the HPCSA minimum requirements for physiotherapy training in South Africa, and one of the things that struck me was how much of it was just more of the same. This document is going to inform physiotherapy education and practice for at least the next decade, yet there was no mention of advances at the cutting edge of medical science or of the massive impact that emerging technologies are going to have on clinical practice. Genetic engineering, nanotechnology, artificial intelligence and robotics are starting to drive significant changes in healthcare and it seems that, as a profession, we’re largely oblivious to what’s coming. It’s dawned on me that we have no real plan for the future of physiotherapy (the closest I’ve seen is Dave Nicholls’ new book, ironically called The End of Physiotherapy).

What would a good plan look like? In the interests of time, I’m just going to take the high-level suggestions from this article on how the US could improve their planning for AI development and make a short comment on each (I’ve expanded on some of these ideas in my OpenPhysio article on the same topic).

  • Invest more: Fund research into practice innovations that take into account the social, economic, ethical and clinical implications of emerging technologies. Breakthroughs in how we can best utilise emerging technologies as core aspects of physiotherapy practice will come through funded research programmes in universities, especially in the early stages of innovation. We need to take the long-term view that, even if robotics, for example, isn’t having a big impact on physiotherapy today, one day we’ll see things like percussion and massage simply go away. We will also need to fund research on what aspects of the care we provide are really valued by patients (and what they, and funders, will pay for).
  • Prepare for job losses: From the article: “While [emerging technologies] can drive economic growth, it may also accelerate the eradication of some occupations, transform the nature of work in other jobs, and exacerbate economic inequality.” For example, self-driving cars are going to massively drive down the injuries that occur as a result of MVAs. Orthopaedic-related physiotherapy work is, therefore, going to dry up as the patient pool gets smaller. Preventative, personalised medicine will likewise result in dramatic reductions in the incidence of chronic conditions of lifestyle. The “education” component of practice will be outsourced to apps. Even if physiotherapy jobs are not entirely lost, they will certainly be transformed unless we start thinking of how our practice can evolve.
  • Nurture talent: We will need to ensure that we retain and recapture interest in the profession. I’m not sure about other countries but in South Africa, we have a relatively high attrition rate in physiotherapy after a few years of clinical work. The employment prospects and long-term career options, especially in the public health system, are quite poor and many talented physiotherapists leave because they’re bored or frustrated. I recently saw a post on LinkedIn where one of our most promising graduates from 5 years ago is now a property developer. After 4 years of intense study and commitment, and 3 years of clinical practice, he just decided that physiotherapy isn’t where he sees his long-term future. He and many others who have left health care practice represent a deep loss for the profession.
  • Prioritise education: At the undergraduate level we should re-evaluate the curriculum and ensure that it is fit for purpose in the 21st century. How much of our current programmes are concerned with the impact of robotics, nanotechnology, genetic engineering and artificial intelligence? We will need to create space for in-depth development within physiotherapy but also ensure development across disciplines (the so-called T-shaped graduate). Continuing professional development will become increasingly important as more aspects of professional work change and over time, are eradicated. Those who cannot (or will not) continue learning are unlikely to have meaningful long-term careers.
  • Guide regulation: At the moment, progress in emerging technologies is being driven by startups that are funded with venture capital and whose primary goal is rapid growth to fuel increasing valuations. This ecosystem doesn’t encourage entrepreneurs to limit risks and instead pushes them to “move fast and break things”, which isn’t exactly aligned with the medical imperative to “first do no harm”. Health professionals will need to ensure that technologies that are introduced into clinical practice are first and foremost serving the interests of patients, rather than driving up the value of medical technology startups. If we are not actively involved in regulating these technologies, we are likely to find our practice subject to them.
  • Understand the technology: In order to engage with any of the previous items in the list, we will first need to understand the technologies involved. For example, if you don’t know how the methods of data gathering and analysis can lead to biased algorithmic decision-making (a toy illustration follows this list), will you be able to argue for why your patient’s health insurance funder shouldn’t make decisions about what interventions you need to provide? We need to ensure that we are not only specialists in clinical practice, but also specialists in how technology will influence clinical practice.
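
To make that last point concrete, here is an entirely synthetic toy example of how skewed data gathering produces a biased decision rule: a simple threshold is fitted on data dominated by one patient group and then applied to another group whose “normal” range differs. The groups, biomarker and numbers are all invented.

```python
import random
random.seed(0)

def sample(group, n):
    # hypothetical biomarker: group A centres on 50, group B on 60; disease shifts it by +15
    base = 50 if group == "A" else 60
    return [(base + (15 if diseased else 0) + random.gauss(0, 5), diseased)
            for diseased in [random.random() < 0.5 for _ in range(n)]]

train = sample("A", 1000) + sample("B", 50)               # group B is badly under-represented
threshold = sorted(v for v, _ in train)[len(train) // 2]  # naive cut-off fitted on the skewed data

def accuracy(data):
    return sum((value > threshold) == diseased for value, diseased in data) / len(data)

print("accuracy for group A:", round(accuracy(sample("A", 1000)), 2))  # high
print("accuracy for group B:", round(accuracy(sample("B", 1000)), 2))  # noticeably worse
```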

Each of the items in the list above is only very briefly covered here, and each could be the foundation for PhD-level programmes of research. If you’re interested in the future of the profession (and by that I mean you’re someone who wonders what health professional practice will look like in 100 years), I’d love to hear your thoughts. Do you know of anyone who has started building our cathedrals?