
I enjoyed reading (January)

This post is also a bit delayed, but I’m OK with that. During January I found myself reading a bit more than usual about robots, androids, augmented reality and related topics. I’m not sure why it worked out that way, but this collection is more or less representative of what I found interesting during that time. Interestingly, I realised that a common thread throughout this theme is that these pieces are all closely related to three books by Daniel Suarez: Daemon, Freedom, and Kill Decision. If you enjoy this kind of thing, you have to read them.

I, Glasshole: My Year With Google Glass (Mat Honan): I’m fascinated with the concept of wearable, context-aware devices and services, of which Glass is simply the most well-known. I think that the ability to overlay digital information on top of the reality we perceive represents an astounding change in how we experience the world.

For much of 2013, I wore the future across my brow, a true Glasshole peering uncertainly into the post-screen world. I’m not out here all alone, at least not for long. The future is coming to your face too. And your wrist. Hell, it might even be in your clothes. You’re going to be wearing the future all over yourself, and soon. When it comes to wearable computing, it’s no longer a question of if it happens, only when and why and can you get in front of it to stop it with a ball-pein hammer? (Answers: Soon. Because it is incredibly convenient. Probably not.) In a few years, we might all be Glassholes. But in 2013, maybe for the last time, I was in dubiously exclusive face-computing company.

Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure (Steve Ranger): The idea of fully autonomous robots that are able to make decisions in critical situations is both disturbing and appealing to me. Disturbing because embedding a moral framework that can deal with the complexity of warfare is ethically problematic. Appealing because in many situations, robots may actually be able to make better decisions than human beings (think of self-driving cars).

While fully autonomous robot weapons might not be deployed for two or three decades, the International Committee for Robot Arms Control (ICRAC), an international group of academics and experts concerned about the implications of a robot arms race, argues a prohibition on the development and deployment of autonomous weapons systems is the correct approach. “Machines should not be allowed to make the decision to kill people,” it states.

Better Than Human: Why Robots Will — And Must — Take Our Jobs (Kevin Kelly): Kevin Kelly’s article, We are the web, was one of the first things I read that profoundly changed the way I think about the internet. Needless to say, I almost always find his thoughts on technology to be insightful and thought-provoking.

All the while, robots will continue their migration into white-collar work. We already have artificial intelligence in many of our machines; we just don’t call it that. Witness one piece of software by Narrative Science (profiled in issue 20.05) that can write newspaper stories about sports games directly from the games’ stats or generate a synopsis of a company’s stock performance each day from bits of text around the web. Any job dealing with reams of paperwork will be taken over by bots, including much of medicine. Even those areas of medicine not defined by paperwork, such as surgery, are becoming increasingly robotic. The rote tasks of any information-intensive job can be automated. It doesn’t matter if you are a doctor, lawyer, architect, reporter, or even programmer: The robot takeover will be epic.

And it has already begun.

A review of Her (Ray Kurzweil): Kurzweil’s thinking on the merging of human beings with technology is fascinating. If you’re interested in this topic, the collection of essays on his blog is awesome.

With emerging eye-mounted displays that project images onto the wearer’s retinas and also look out at the world, we will indeed soon be able to do exactly that. When we send nanobots into the brain — a circa-2030s scenario by my timeline — we will be able to do this with all of the senses, and even intercept other people’s emotional responses.

By Michael Rowe

I'm a lecturer in the Department of Physiotherapy at the University of the Western Cape in Cape Town, South Africa. I'm interested in technology, education and healthcare and look for places where these things meet.