> …when I started reading about AI safety, I was great at finding reasons to dismiss it. I had already decided that AI couldn’t be a big danger because that just sounded bizarre, and then I searched for ways to justify my unconcern, quite certain that I would find one soon enough.

Cohen, M. (n.d.). *Extinction risk from artificial intelligence*.
I thought I’d provide some light reading for the weekend by sharing this post about the extinction risks presented by our drive to build Artificial General Intelligence (AGI). Even if you don’t think AGI is something we need to care about, you should read this anyway; if nothing else, you’ll get some insight into what concerns people about the long-term future of humanity.
For a deeper dive into the existential risk posed by AGI, you might also consider the books *Superintelligence* by Nick Bostrom and *Our Final Invention* by James Barrat, as well as these very detailed posts: “The case for reducing extinction risks” by Benjamin Todd and “The Artificial Intelligence Revolution” by Tim Urban.
The post I link to above presents arguments in support of the following claims and then provides short responses to common rebuttals:
- Humans will eventually make a human-level intelligence that pursues goals.
- That intelligence will quickly surpass human-level intelligence.
- At that point, it will be very hard to keep it from connecting to the Internet.
- Most goals, when pursued efficiently by an AI connected to the Internet, result in the extinction of biological life.
- Most goals that preserve human existence still would not preserve freedom, autonomy, and a number of other things we value.
- It is profoundly difficult to give an AI a goal such that it would preserve the things we care about; we can’t even check whether a potential goal would be safe, and we have to get AI right on the first attempt.
- If someone makes human-level-AI before anyone makes human-level-AI-with-a-safe-goal-structure, we will all die, and as hard as the former is, the latter is much harder.
To be honest, I find the argument that an Artificial General Intelligence poses a significant risk to humanity to be both plausible and compelling.