The web as a universal standard (Tony Bates): It wasn’t so much the content of this post that triggered my thinking as its title. I’ve been wondering for a while what a “future-proof” knowledge management database would look like. While I think the most powerful ones will be semantic (e.g. the KDE desktop integrated with the semantic web), there will also be a place for standardised, text-based media like HTML.
The half-life of facts (Maria Popova):
Facts are how we organize and interpret our surroundings. No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives. In fact, we even have a phrase for the state of affairs that occurs when we fail to do this: cognitive dissonance.
How parents normalised password sharing (danah boyd):
When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.
Academic research published as a graphic novel (Gareth Morris): Over the past few months I’ve been thinking about different ways for me to share the results of my PhD (other than the papers and conference presentations that were part of the process). I love the idea of using stories to share ideas, but had never thought about presenting research in the form of a graphic novel.
Getting rich off of schoolchildren (David Sirota):
You know how it goes: The pervasive media mythology tells us that the fight over the schoolhouse is supposedly a battle between greedy self-interested teachers who don’t care about children and benevolent billionaire “reformers” whose political activism is solely focused on the welfare of kids. Epitomizing the media narrative, the Wall Street Journal casts the latter in sanitized terms, reimagining the billionaires as philanthropic altruists “pushing for big changes they say will improve public schools.”
The first reason to scoff at this mythology should be obvious: It simply strains credulity to insist that pedagogues who get paid middling wages but nonetheless devote their lives to educating kids care less about those kids than do the Wall Street hedge funders and billionaire CEOs who finance the so-called reform movement. Indeed, to state that pervasive assumption out loud is to reveal how utterly idiotic it really is, and yet it is baked into almost all of today’s coverage of education politics.
The case for user agent extremism (Anil Dash): Anil’s post has some close parallels with the speech by Eben Moglen that I linked to last month. Both make the point that the more deeply technology is integrated into our lives, the more control over it we lose. We all need to become invested in wresting control of our digital lives and identities back from corporations, although exactly how to do that is a difficult problem.
The idea captured in the phrase “user agent” is a powerful one, that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we’re in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more an agent of those corporations.
Singularities and nightmares (David Brin):
Options for a coming singularity include self-destruction of civilization, a positive singularity, a negative singularity (machines take over), and retreat into tradition. Our urgent goal: find (and avoid) failure modes, using anticipation (thought experiments) and resiliency — establishing robust systems that can deal with almost any problem as it arises.
Is AI near a takeoff point? (J. Storrs Hall):
Computers built by nanofactories may be millions of times more powerful than anything we have today, capable of creating world-changing AI in the coming decades. But to avoid a dystopia, the nature (and particularly intelligence) of government (a giant computer program — with guns) will have to change.