The AI Threat to Democracy

With the advent of strong reinforcement learning…, goal-oriented strategic AI is now very much a reality. The difference is one of categories, not increments. While a supervised learning system relies upon the metrics fed to it by humans to come up with meaningful predictions and lacks all capacity for goal-oriented strategic thinking, reinforcement learning systems possess an open-ended utility function and can strategize continuously on how to fulfil it.

Source: Krumins, A. (2018). The AI Threat to Democracy.

“…an open-ended utility function” means that the algorithm is given a goal state and then left to its own devices to figure out how best to optimise towards that goal. It does this by trying a solution and checking whether it moved closer to the goal. Every step that moves the algorithm closer to the goal state is rewarded (typically with a token that the algorithm is conditioned to value). In other words, an RL algorithm takes actions to maximise reward. Consequently, it represents a fundamentally different approach to problem-solving from supervised learning, which requires human intervention to tell the algorithm whether or not its conclusions are valid.
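
To make that loop concrete, here is a minimal sketch of trial-and-reward learning, using tabular Q-learning on a tiny made-up corridor environment. The environment, reward values, and hyperparameters below are invented purely for illustration; real RL systems like AlphaGo replace the lookup table with deep neural networks.

```python
# A minimal sketch of the trial-and-reward loop described above, using
# tabular Q-learning on a hypothetical 1-D corridor: the agent starts at
# cell 0 and is rewarded only for reaching the goal cell. The environment,
# rewards, and hyperparameters are all illustrative choices.
import random

N_CELLS = 5          # corridor cells 0..4; cell 4 is the goal state
ACTIONS = (-1, +1)   # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index] estimates the long-run reward of each action.
Q = [[0.0, 0.0] for _ in range(N_CELLS)]

for episode in range(500):
    state = 0
    while state != N_CELLS - 1:
        # Explore occasionally, otherwise exploit the current estimates.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_CELLS - 1)
        reward = 1.0 if next_state == N_CELLS - 1 else 0.0
        # Update toward the reward plus the discounted value of what follows.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, "step right" dominates in every cell
```

No human ever tells this agent which moves are good; the reward signal alone shapes its behaviour, which is exactly the contrast with supervised learning drawn above.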

In the video below, a DeepMind researcher uses AlphaGo and AlphaGo Zero to illustrate the difference between supervised and reinforcement learning.

This is both exciting and a bit unsettling. Exciting because it means that an AI-based system could iteratively solve problems that we don’t yet know how to solve ourselves, which has implications for the really big, complex challenges we face, like climate change. Unsettling because we should probably start thinking very carefully about the goal states we ask RL algorithms to optimise towards, especially since we’re not specifying up front what path the system should take to reach the goal, and we have no idea whether the algorithm will take human values into consideration along the way. We may be at a point where the paperclip maximiser is no longer just a weird thought experiment.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Source: Bostrom, N. (2003). Ethical Issues in Advanced Artificial Intelligence.

We may end up choosing goal states without specifying in advance what paths the algorithm should not take because they would be unaligned with human values. Like the problem that Mickey faces in the Sorcerer’s Apprentice, the unintended consequences of our choices with reinforcement learning may be truly significant.
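
The gap between the goal we specify and the behaviour we intend is easy to show with a deliberately contrived sketch. Nothing below comes from Bostrom or Krumins; it is a hypothetical planner that scores paths by the reward we wrote down rather than the reward we meant, so the “optimal” plan walks straight through a vase the objective never mentions.

```python
# A toy, entirely hypothetical illustration of the misspecification worry
# above: the reward only says "reach the goal quickly", and a vase the
# designer implicitly cares about never appears in the objective, so the
# optimal plan smashes it.
SHORT_PATH = ["start", "vase_tile", "goal"]           # walks through the vase
LONG_PATH = ["start", "detour1", "detour2", "goal"]   # walks around it

def specified_reward(path):
    # What we actually wrote down: shorter paths to the goal score higher.
    return -len(path) if path[-1] == "goal" else float("-inf")

def intended_reward(path):
    # What we meant: also, don't break the vase. We never encoded this.
    penalty = 100 if "vase_tile" in path else 0
    return specified_reward(path) - penalty

best = max([SHORT_PATH, LONG_PATH], key=specified_reward)
print(best)                   # ['start', 'vase_tile', 'goal']
print(intended_reward(best))  # far worse than the detour, by our real lights
```

The optimiser is not malicious; the vase simply never enters its objective, which is the paperclip problem in miniature.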

a16z Podcast: Revenge of the Algorithms (Over Data)… Go! No?

An interesting (and sane) conversation about the defeat of AlphaGo by AlphaGo Zero. It almost completely avoids the science-fiction-y media coverage that tends to emphasise the potential for artificial general intelligence and instead focuses on the following key points:

  • Go is a stupendously difficult board game for computers to play, but it’s a game in which both players have total information and the rules are relatively simple. Neither is true of any real-world decision-making scenario, so this is necessarily a very narrow demonstration of what an intelligent machine can do.
  • AlphaGo Zero represents an order of magnitude improvement in algorithmic modelling and power consumption. In other words, it does a lot more with a lot less.
  • Related to this, AlphaGo Zero started from scratch, with humans providing only the rules of the game. Zero used reinforcement learning (rather than supervised learning) to figure out the same moves (and in some cases better ones) that human beings have developed over the last thousand years or so.
  • It’s an exciting achievement but shouldn’t be conflated with any significant step towards machine intelligence that transfers beyond highly constrained scenarios.

Here’s the abstract from the publication in Nature:

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
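
Read structurally, that abstract describes a fairly compact loop. The sketch below is emphatically not DeepMind’s implementation: the network, the search, and the game are all dummy placeholders (real MCTS is reduced to sampling from the network’s priors), and it serves only to show how the pieces named in the abstract fit together.

```python
# A structural sketch of the self-play loop the abstract describes: one
# network predicts both a move distribution and a winner estimate, search
# is guided by the network, and the network is then trained on the
# search's own choices and the game's outcome. All names and mechanics
# here are invented placeholders, not DeepMind's code.
import random

class PolicyValueNet:
    """Stub for the deep network: predict() returns (move_probs, value)."""
    def predict(self, position):
        moves = legal_moves(position)
        probs = {m: 1 / len(moves) for m in moves}
        return probs, 0.0  # value in [-1, 1]: the predicted winner

    def train(self, examples):
        pass  # fit the net to (position, search_probs, game_outcome) triples

def legal_moves(position):
    return list(range(3))  # dummy game: three moves, ends after 10 plies

def guided_search(net, position):
    # Placeholder for MCTS: here we just return the net's priors directly.
    probs, _ = net.predict(position)
    return probs

def self_play_game(net):
    position, history = 0, []
    for _ in range(10):
        search_probs = guided_search(net, position)
        move = random.choices(list(search_probs), list(search_probs.values()))[0]
        history.append((position, search_probs))
        position += move
    outcome = random.choice([-1, 1])  # dummy winner label
    return [(pos, probs, outcome) for pos, probs in history]

net = PolicyValueNet()
for iteration in range(3):
    examples = [ex for _ in range(5) for ex in self_play_game(net)]
    net.train(examples)  # a stronger net yields stronger search next iteration
```

The key design point, per the abstract, is that the same network is both teacher and student: its own search results become its training targets, so each iteration bootstraps the next.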

DeepMind’s latest AI breakthrough is its most significant yet

“We reached a superior level of performance after training for just 72 hours with AlphaGo Zero,” he says. Only 4.9 million simulated games were needed to train Zero, compared to the original AlphaGo’s 30 million. After the three days of learning Zero was able to defeat the Lee Sedol-conquering version 100-0. After it had been playing the game for 40 days, Zero defeated DeepMind’s previous strongest version of AlphaGo, called Master, which defeated Chinese master Ke Jie in May… Additionally, the new system only uses one neural network instead of two and four of Google’s AI processors compared to the 48 needed to beat Lee.

“It is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules,” the research paper concludes. The system learned common human moves and tactics and supplemented them with its own, more efficient moves. “It found these human moves, it tried them and then ultimately it found something it prefers,” Silver says.

Source: Burgess, M. (2017). DeepMind’s latest AI breakthrough is its most significant yet.

This article really gives a sense of the pace of progress in machine learning (in this case, reinforcement learning). Yes, algorithms are limited. No, they cannot generalise across contexts, and they only work on very specific tasks in very narrowly constrained situations. The point is that they keep getting better, and the rate of change is accelerating.