Extinction Risk from Artificial Intelligence.

…when I started reading about AI safety, I was great at finding reasons to dismiss it. I had already decided that AI couldn’t be a big danger because that just sounded bizarre, and then I searched for ways to justify my unconcern, quite certain that I would find one soon enough.

Cohen, M. (n.d.). Extinction risk from artificial intelligence.

I thought I’d provide some light reading for the weekend by sharing this post about the extinction risks presented by our drive to build Artificial General Intelligence (AGI). Even if you don’t think that AGI is something we need to care about, you should read it anyway; if nothing else, you’ll get some insight into what concerns people about the long-term future of humanity.

For a deeper dive into the existential risk posed by AGI, you might also consider the books Superintelligence by Nick Bostrom and Our Final Invention by James Barrat, as well as these very detailed posts: The case for reducing extinction risks by Benjamin Todd and The artificial intelligence revolution by Tim Urban.

The post I link to above presents arguments in support of the following claims and then provides short responses to common rebuttals:

  1. Humans will eventually make a human-level intelligence that pursues goals.
  2. That intelligence will quickly surpass human-level intelligence.
  3. At that point, it will be very hard to keep it from connecting to the Internet.
  4. Most goals, when pursued efficiently by an AI connected to the Internet, result in the extinction of biological life.
  5. Most goals that preserve human existence still would not preserve freedom, autonomy, and a number of other things we value.
  6. It is profoundly difficult to give an AI a goal such that it would preserve the things we care about; we can’t even check whether a potential goal would be safe, and we have to get AI right on the first attempt.
  7. If someone makes human-level-AI before anyone makes human-level-AI-with-a-safe-goal-structure, we will all die, and as hard as the former is, the latter is much harder.

Urban, T. (2015). The artificial intelligence revolution. Wait But Why.

To be honest, I find the argument that an Artificial General Intelligence poses a significant risk to humanity to be plausible and compelling.

‘The discourse is unhinged’: how the media gets AI alarmingly wrong

Zachary Lipton, an assistant professor at the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from “interesting-ish research” to “sensationalized crap”. According to Lipton, in recent years broader interest in topics like “machine learning” and “deep learning” has led to a deluge of this type of opportunistic journalism, which misrepresents research for the purpose of generating retweets and clicks – he calls it the “AI misinformation epidemic”.

Schwartz, O. (2018). ‘The discourse is unhinged’: How the media gets AI alarmingly wrong. The Guardian.

There’s a lot of confusion around what we think of as AI. For most people actually working in the field, current AI and machine learning research presents its findings as solutions to very narrowly constrained problems, derived from the statistical manipulation of large data sets and expressed within certain confidence intervals. There’s no talk of consciousness, choice, or values of any kind. To be clear, this is “intelligence” as defined within very specific parameters. It’s important that clinicians and educators (and everyone else, actually) understand, at least at a basic level, what we mean when we say “artificial intelligence”.
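
To make that concrete, here’s a minimal sketch in Python using scikit-learn (the data set, model, and numbers are purely illustrative, not drawn from any of the articles above) of what one of these narrowly constrained systems actually is: a statistical model fit to a labelled data set, solving exactly one task and reporting its performance with a confidence interval.

```python
# A sketch of what "narrow AI" typically looks like in practice: a statistical
# model fit to a labelled data set, solving exactly one tightly constrained
# problem (here, classifying handwritten digits) and reporting its performance
# with a confidence interval. Nothing here involves consciousness, choice, or
# values of any kind.
import math

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A standard benchmark data set: 8x8 images of handwritten digits (0-9).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a simple statistical model to the training data.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Accuracy on held-out data, with an approximate 95% confidence interval
# (normal approximation to the binomial proportion).
acc = model.score(X_test, y_test)
n = len(y_test)
margin = 1.96 * math.sqrt(acc * (1 - acc) / n)
print(f"digit-classification accuracy: {acc:.3f} ± {margin:.3f}")
```

That’s the entire “mind” of such a system: coefficients estimated from data, good for one narrowly constrained problem and nothing else.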

Of course, there are also people working on questions of artificial general intelligence and superintelligence, which are different to the narrow (or weak) intelligence being reported on in today’s sensationalist headlines.