Categories
AI

Extinction Risk from Artificial Intelligence

…when I started reading about AI safety, I was great at finding reasons to dismiss it. I had already decided that AI couldn’t be a big danger because that just sounded bizarre, and then I searched for ways to justify my unconcern, quite certain that I would find one soon enough.

Cohen, M. (n.d.). Extinction risk from artificial intelligence.

I thought I’d provide some light reading for the weekend by sharing this post about the extinction risks presented by our drive to build Artificial General Intelligence (AGI). Even if you don’t think that AGI is a thing we need to care about, you should read this anyway; if nothing else you’ll get some insight into what concerns people about the long-term future of humanity.

For a deeper dive into the topic of the existential risk posed by AGI you might also consider the books Superintelligence by Nick Bostrom or Our Final Invention by James Barrat, as well as these very detailed posts: The case for reducing extinction risks by Benjamin Todd and The artificial intelligence revolution by Tim Urban.

The post I link to above presents arguments in support of the following claims and then provides short responses to common rebuttals:

  1. Humans will eventually make a human-level intelligence that pursues goals.
  2. That intelligence will quickly surpass human-level intelligence.
  3. At that point, it will be very hard to keep it from connecting to the Internet.
  4. Most goals, when pursued efficiently by an AI connected to the Internet, result in the extinction of biological life.
  5. Most goals that preserve human existence still would not preserve freedom, autonomy, and a number of other things we value.
  6. It is profoundly difficult to give an AI a goal such that it would preserve the things we care about; we can’t even check whether a potential goal would be safe; and we have to get AI right on the first attempt.
  7. If someone makes human-level-AI before anyone makes human-level-AI-with-a-safe-goal-structure, we will all die, and as hard as the former is, the latter is much harder.

Urban, T. (2015). The artificial intelligence revolution. Wait But Why.

To be honest, I find the argument that an Artificial General Intelligence poses a significant risk to humanity to be both plausible and compelling.

Categories
learning

Teaching, learning and risk

I’ve had these ideas bouncing around in my head for a week or so and finally have a few minutes to try and get them out. I’ve been wondering why changing practice – in higher education and the clinical context – is so hard, and one way that I think I can make some sense out of it is to use the idea of risk.

To change anything is to take a risk where we don’t know what the outcome will be. We risk messing up something that kind-of-works-OK and replacing it with something that could be worse. To change our practice is to risk moving into spaces we might find uncomfortable. To take a risk is to make a decision that you’re OK with not knowing; to be OK with not understanding; to be OK with uncertainty. And many of us are really not OK with any of those things. And so we resist the change because when we don’t take the risk we’re choosing to be safe. I get that.

But the irony is that we ask our students to take risks every single day, because to learn is to risk. Learning is partly about making yourself vulnerable by admitting – to yourself and others – that there is something you don’t know. And to be vulnerable is to risk being hurt. We expect our students to move into those uncomfortable spaces where they have to take ownership of not knowing and of being uncertain. “Put your hand up if you don’t know.” To put your hand up and announce – to everyone – that you don’t have the answer is really risky.

Why is it OK for us to ask students to put themselves at risk if we’re not prepared to do the same? If my students must put their hands up and announce their ignorance, why don’t I? If change is about risk and so is learning, is it reasonable to ask if changing is about learning? And if that’s true, what does it say about those of us who resist change?