Developing AI will have unintended consequences.

I’m reading the collection of responses to John Brockman’s 2015 question, What to Think About Machines That Think, and want to share an idea highlighted by Peter Norvig in his short essay, “Design machines to deal with the world’s complexity”.

Pessimists warn that we don’t know how to safely and reliably build large, complex AI systems. They have a valid point. We also don’t know how to safely and reliably build large, complex non-AI systems. We need to do better at predicting, controlling, and mitigating the unintended consequences of the systems we build.

For example, we invented the internal combustion engine 150 years ago, and in many ways it has served humanity well, but it has also led to widespread pollution, political instability over access to oil, more than a million traffic deaths per year, and (some say) a deterioration in the social cohesiveness of neighborhoods.

Norvig, P. (2015). Design machines to deal with the world’s complexity. In Brockman, J. (Ed.), What to Think About Machines That Think.

There’s a lot of justified concern about how we’re going to use AI in society in general, and in healthcare in particular. But I think it’s important to point out that it does us no good to blame algorithms as if they had any agency. (I’m talking here about narrow, or weak, AI rather than artificial general intelligence, which will almost certainly have agency.)

It’s human beings who will make choices about how this technology is used and, as with previous technologies, it’s likely that those choices will have unintended consequences. The next time you read a headline decrying the dangers presented by AI, take a moment to reflect on the dangers presented by human beings.

You can see the entirety of Norvig’s contribution here (all of the responses are public), although note that the book chapters have different titles from the original contributions.