I enjoyed reading (January)

This post is also a bit delayed, but I’m OK with that. During January I found myself reading a bit more than usual about robots, androids, augmented reality and related topics. I’m not sure why it worked out that way, but this collection is more or less representative of what I found interesting during that time. Interestingly, I realised that the common thread running through these pieces is that they’re all closely related to three books by Daniel Suarez: Daemon, Freedom, and Kill Decision. If you enjoy this kind of thing, you have to read them.

I, Glasshole: My Year With Google Glass (Mat Honan): I’m fascinated with the concept of wearable, context-aware devices and services, of which Glass is simply the most well-known. I think that the ability to overlay digital information on top of the reality we perceive represents an astounding change in how we experience the world.

For much of 2013, I wore the future across my brow, a true Glasshole peering uncertainly into the post-screen world. I’m not out here all alone, at least not for long. The future is coming to your face too. And your wrist. Hell, it might even be in your clothes. You’re going to be wearing the future all over yourself, and soon. When it comes to wearable computing, it’s no longer a question of if it happens, only when and why and can you get in front of it to stop it with a ball-pein hammer? (Answers: Soon. Because it is incredibly convenient. Probably not.) In a few years, we might all be Glassholes. But in 2013, maybe for the last time, I was in dubiously exclusive face-computing company.

Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure (Steve Ranger): The idea of fully autonomous robots that are able to make decisions in critical situations is both disturbing and appealing to me. Disturbing because embedding a moral framework that can deal with the complexity of warfare is ethically problematic. Appealing because in many situations, robots may actually be able to make better decisions than human beings (think of self-driving cars).

While fully autonomous robot weapons might not be deployed for two or three decades, the International Committee for Robot Arms Control (ICRAC), an international group of academics and experts concerned about the implications of a robot arms race, argues a prohibition on the development and deployment of autonomous weapons systems is the correct approach. “Machines should not be allowed to make the decision to kill people,” it states.

Better Than Human: Why Robots Will — And Must — Take Our Jobs (Kevin Kelly): Kevin Kelly’s article, We are the web, was one of the first things I read that profoundly changed the way I think about the internet. Needless to say, I almost always find his thoughts on technology to be insightful and thought-provoking.

All the while, robots will continue their migration into white-collar work. We already have artificial intelligence in many of our machines; we just don’t call it that. Witness one piece of software by Narrative Science (profiled in issue 20.05) that can write newspaper stories about sports games directly from the games’ stats or generate a synopsis of a company’s stock performance each day from bits of text around the web. Any job dealing with reams of paperwork will be taken over by bots, including much of medicine. Even those areas of medicine not defined by paperwork, such as surgery, are becoming increasingly robotic. The rote tasks of any information-intensive job can be automated. It doesn’t matter if you are a doctor, lawyer, architect, reporter, or even programmer: The robot takeover will be epic.

And it has already begun.

A review of Her (Ray Kurzweil): Kurzweil’s thinking on the merging of human beings with technology is fascinating. If you’re interested in this topic, the collection of essays on his blog is awesome.

With emerging eye-mounted displays that project images onto the wearer’s retinas and also look out at the world, we will indeed soon be able to do exactly that. When we send nanobots into the brain — a circa-2030s scenario by my timeline — we will be able to do this with all of the senses, and even intercept other people’s emotional responses.


I enjoyed reading (March)


The web as a universal standard (Tony Bates): It wasn’t so much the content of this post that triggered my thinking, but the title. I’ve been wondering for a while what a “future-proof” knowledge management database would look like. While I think the most powerful ones will be semantic (like the KDE desktop integrated with the semantic web), there will also be a place for standardised, text-based media like HTML.


The half-life of facts (Maria Popova):

Facts are how we organize and interpret our surroundings. No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives. In fact, we even have a phrase for the state of affairs that occurs when we fail to do this: cognitive dissonance.


How parents normalised password sharing (danah boyd):

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.


Academic research published as a graphic novel (Gareth Morris): Over the past few months I’ve been thinking about different ways for me to share the results of my PhD (other than the papers and conference presentations that were part of the process). I love the idea of using stories to share ideas, but had never thought about presenting research in the form of a graphic novel.



Getting rich off of schoolchildren (David Sirota):

You know how it goes: The pervasive media mythology tells us that the fight over the schoolhouse is supposedly a battle between greedy self-interested teachers who don’t care about children and benevolent billionaire “reformers” whose political activism is solely focused on the welfare of kids. Epitomizing the media narrative, the Wall Street Journal casts the latter in sanitized terms, reimagining the billionaires as philanthropic altruists “pushing for big changes they say will improve public schools.”

The first reason to scoff at this mythology should be obvious: It simply strains credulity to insist that pedagogues who get paid middling wages but nonetheless devote their lives to educating kids care less about those kids than do the Wall Street hedge funders and billionaire CEOs who finance the so-called reform movement. Indeed, to state that pervasive assumption out loud is to reveal how utterly idiotic it really is, and yet it is baked into almost all of today’s coverage of education politics.


The case for user agent extremism (Anil Dash): Anil’s post has some close parallels with the speech by Eben Moglen that I linked to last month. Both make the point that as technology becomes more integrated into our lives, the less control we retain over it. We all need to become invested in wresting control of our digital lives and identities back from corporations, although exactly how to do that is a difficult problem.

The idea captured in the phrase “user agent” is a powerful one, that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we’re in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more a user-driven agent of those corporations.


Singularities and nightmares (David Brin):

Options for a coming singularity include self-destruction of civilization, a positive singularity, a negative singularity (machines take over), and retreat into tradition. Our urgent goal: find (and avoid) failure modes, using anticipation (thought experiments) and resiliency — establishing robust systems that can deal with almost any problem as it arises.


Is AI near a takeoff point? (J. Storrs Hall):

Computers built by nanofactories may be millions of times more powerful than anything we have today, capable of creating world-changing AI in the coming decades. But to avoid a dystopia, the nature (and particularly intelligence) of government (a giant computer program — with guns) will have to change.