Clinical AI

Comment: The danger of AI is weirder than you think.

AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they’re optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don’t have any concept of what this content actually is, and they don’t have any concept of what the consequences might be of recommending this content.

Shane, J. (2019). The danger of AI is weirder than you think. TED.

We don’t need to worry about AI that is conscious (yet), only about AI that is competent and that we’ve given a poorly considered problem to solve. When we think about the solution space for AI-based systems, we need to be aware that the “correct” solution for the algorithm is whatever literally solves the problem as stated, regardless of the method.

The danger of AI isn’t that it’s going to rebel against us, but that it’s going to do exactly what we ask it to.

Janelle Shane

This matters in almost every context we care about. Consider the following scenario. ICUs are very expensive for a lot of good reasons: a highly specialised workforce, a very low patient-to-staff ratio, a great deal of time spent with each patient, and extremely expensive medication. We might reasonably ask an AI to reduce the cost of running an ICU, thinking that it could help to develop more efficient workflows, for example. But the algorithm might conclude that the most cost-effective solution is to kill all the patients. According to the problem we posed, this isn’t incorrect, but it’s clearly not what we were looking for, and any human being on earth, including small children, would understand why.
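The ICU scenario can be made concrete with a toy policy search. All of the policy names and numbers below are invented for illustration: the point is only that an objective that says nothing but “minimise cost” will happily select the policy that abandons the patients, while adding the common-sense constraint we never thought to state recovers the answer we actually meant.

```python
# Toy illustration of objective misspecification (hypothetical policies and numbers).
# Each candidate "policy" has a daily cost and a patient survival rate.
policies = {
    "current_workflow": {"cost": 10_000, "survival": 0.90},
    "leaner_staffing":  {"cost": 7_000,  "survival": 0.88},
    "treat_no_one":     {"cost": 500,    "survival": 0.00},
}

# Naive objective: minimise cost, and nothing else.
naive = min(policies, key=lambda p: policies[p]["cost"])

# The objective we actually meant: minimise cost, subject to a
# common-sense constraint we never stated explicitly.
acceptable = {p: v for p, v in policies.items() if v["survival"] >= 0.85}
constrained = min(acceptable, key=lambda p: acceptable[p]["cost"])

print(naive)        # the policy that literally minimises cost
print(constrained)  # the cheapest policy that still keeps patients alive
```

Both objectives are "solved" correctly by the optimiser; only the second one encodes what we wanted.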

Before we can ask AI-based systems to help solve problems we care about, we’ll first need to develop a language for communicating with them: a language that includes the common-sense parameters that inherently bound all human-to-human conversation. When I ask a taxi driver to take me to the airport “as quickly as possible”, I don’t also need to specify that we shouldn’t break any rules of the road, and that I’d like to arrive alive. We both understand the boundaries that define the limits of my request. As the video above shows, an AI doesn’t have any “common sense”, and this is a major obstacle to progress towards AI that can address real-world problems beyond the narrow contexts where it is currently competent.

AI ethics

Comment: Will robots have rights in the future?

If we get to create robots that are also capable of feeling pain then that will be somewhere else that we have to push the circle of moral concern backwards because I certainly think we would have to include them in our moral concern once we’ve actually created beings with capacities, desires, wants, enjoyments, miseries that are similar to ours.

Singer, P. (2019). Will robots have rights in the future? Big Think.

Peter Singer makes a compelling argument that sentient robots (assuming we reach the stage of developing Artificial General Intelligence) ought to be treated in the same way that we treat each other, since they would exhibit the same capacity for pain, desire, joy, etc. as human beings.

I’m interested, though, in what happens when we push the moral boundary further, since there’s no reason to think that human beings represent any kind of ceiling on what can be felt and experienced. Would artificially created sentient beings deserve “more” or different rights than human beings, based on an increased capacity for experiencing a wider range of feelings than is available to us? Might it get to the point where we are to AI-based systems what pigs are to us?