
Comment: The danger of AI is weirder than you think.

AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they’re optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don’t have any concept of what this content actually is, and they don’t have any concept of what the consequences might be of recommending this content.

Shane, J. (2019). The danger of AI is weirder than you think. TED.

We don’t need to worry about AI that is conscious (yet), only that it is competent and that we’ve given it a poorly considered problem to solve. When we think about the solution space for AI-based systems, we need to be aware that the “correct” solution for the algorithm is one that literally solves the stated problem, regardless of the method.

The danger of AI isn’t that it’s going to rebel against us, but that it’s going to do exactly what we ask it to.

Janelle Shane

This matters in almost every context we care about. Consider the following scenario. ICUs are very expensive for a lot of good reasons: they have a very specialised workforce, a very low staff-to-patient ratio, the time spent with each patient is very high, and the medication is crazy expensive. We might reasonably ask an AI to reduce the cost of running an ICU, thinking that it could help to develop more efficient workflows, for example. But the algorithm might conclude that the most cost-effective solution is to kill all the patients. According to the problem we posed, this isn’t incorrect, but it’s clearly not what we were looking for, and any human being on earth, including small children, would understand why.
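The failure mode in this scenario can be sketched as a toy optimisation problem. Everything below is invented for illustration (the names, the cost figure, the patient count): an objective that only measures cost happily lands on the degenerate “treat nobody” solution, while the same objective with the common-sense constraint stated explicitly does not.

```python
# Toy illustration of objective misspecification (all numbers invented).
# The "search space" is simply how many patients the ICU treats.

COST_PER_PATIENT = 5000       # hypothetical daily cost per ICU patient
PATIENTS_NEEDING_CARE = 10

def cost(patients_treated):
    """The only thing the naive objective measures."""
    return patients_treated * COST_PER_PATIENT

# Naive specification: "minimise the cost of running the ICU".
# The cheapest ICU is the one that treats nobody.
naive_solution = min(range(PATIENTS_NEEDING_CARE + 1), key=cost)
print(naive_solution)  # 0

# Better specification: the same objective, plus the constraint that is
# obvious to any human and therefore never gets said out loud.
def is_acceptable(patients_treated):
    return patients_treated == PATIENTS_NEEDING_CARE  # everyone gets care

constrained_solution = min(
    (p for p in range(PATIENTS_NEEDING_CARE + 1) if is_acceptable(p)),
    key=cost,
)
print(constrained_solution)  # 10
```

The point of the sketch is that the optimiser is not malicious in either case; it simply minimises exactly what it was given.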

Before we can ask AI-based systems to help solve problems we care about, we’ll first need to develop a language for communicating with them: a language that includes the common-sense parameters that inherently bound all human-to-human conversation. When I ask a taxi driver to take me to the airport “as quickly as possible”, I don’t also need to specify that we shouldn’t break any rules of the road, and that I’d like to arrive alive. We both understand the boundaries that define the limits of my request. As the video above shows, an AI doesn’t have any “common sense”, and this is a major obstacle to progress towards AI that can address real-world problems beyond the narrow contexts where it is currently competent.

By Michael Rowe

I'm a lecturer in the Department of Physiotherapy at the University of the Western Cape in Cape Town, South Africa. I'm interested in technology, education and healthcare and look for places where these things meet.
