I enjoyed reading (March)


The web as a universal standard (Tony Bates): It wasn’t so much the content of this post that triggered my thinking, but the title. I’ve been wondering for a while what a “future-proof” knowledge management database would look like. While I think the most powerful ones will be semantic (e.g. like the KDE desktop integrated with the semantic web), there will also be a place for standardised, text-based media like HTML.


The half-life of facts (Maria Popova):

Facts are how we organize and interpret our surroundings. No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives. In fact, we even have a phrase for the state of affairs that occurs when we fail to do this: cognitive dissonance.


How parents normalised password sharing (danah boyd):

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.


Academic research published as a graphic novel (Gareth Morris): Over the past few months I’ve been thinking about different ways for me to share the results of my PhD (other than the papers and conference presentations that were part of the process). I love the idea of using stories to share ideas, but had never thought about presenting research in the form of a graphic novel.



Getting rich off of schoolchildren (David Sirota):

You know how it goes: The pervasive media mythology tells us that the fight over the schoolhouse is supposedly a battle between greedy self-interested teachers who don’t care about children and benevolent billionaire “reformers” whose political activism is solely focused on the welfare of kids. Epitomizing the media narrative, the Wall Street Journal casts the latter in sanitized terms, reimagining the billionaires as philanthropic altruists “pushing for big changes they say will improve public schools.”

The first reason to scoff at this mythology should be obvious: It simply strains credulity to insist that pedagogues who get paid middling wages but nonetheless devote their lives to educating kids care less about those kids than do the Wall Street hedge funders and billionaire CEOs who finance the so-called reform movement. Indeed, to state that pervasive assumption out loud is to reveal how utterly idiotic it really is, and yet it is baked into almost all of today’s coverage of education politics.


The case for user agent extremism (Anil Dash): Anil’s post has some close parallels with this speech by Eben Moglen, that I linked to last month. The idea is that, as technology becomes increasingly integrated into our lives, we are losing more and more control. We all need to become invested in wresting control of our digital lives and identities back from corporations, although exactly how to do that is a difficult problem.

The idea captured in the phrase “user agent” is a powerful one, that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we’re in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more a user-driven agent of those corporations.


Singularities and nightmares (David Brin):

Options for a coming singularity include self-destruction of civilization, a positive singularity, a negative singularity (machines take over), and retreat into tradition. Our urgent goal: find (and avoid) failure modes, using anticipation (thought experiments) and resiliency — establishing robust systems that can deal with almost any problem as it arises.


Is AI near a takeoff point? (J. Storrs Hall):

Computers built by nanofactories may be millions of times more powerful than anything we have today, capable of creating world-changing AI in the coming decades. But to avoid a dystopia, the nature (and particularly intelligence) of government (a giant computer program — with guns) will have to change.


Twitter Weekly Updates for 2011-03-21

Microsoft ignoring standards?

It seems as if the beta release of MS Outlook 2010 has stirred up some controversy around its decision to continue using Word’s rendering engine to display HTML emails. This hasn’t gone down well in some parts of the community, with groups of people struggling to accept the fact that MS doesn’t care about standards or its customers.

I hope that MS continues this trend for as long as possible, because the more people who understand that an open and transparent ecosystem benefits everyone, the less likely they are to use proprietary software.

Mozilla Open Education course: seminar 3

Open web tech

Again, I missed this seminar because of poor internet connectivity on the day and am catching up on the audio after the fact.  Here are my notes from the presentation given by Mozilla’s Chris Blizzard.

  1. Open as a concept
  2. Innovation and change = important building blocks
  3. Relevance and why open matters
  4. Repurposing key web technologies

“Open”: what does it mean?  First of all, the opposite of open is not necessarily “closed”…though useful terms, in this context they shouldn’t be seen as polar opposites.  In the context of the open web, the opposite of open may be thought of as opaque…you don’t understand how it works, can’t see inside it, don’t know how it came about.  The metaphor is a visual one; open, therefore, can be thought of as “transparent”.

Not requiring permission is an important component of open because it relates to patents, licensing, etc.  Compare video codecs like H.264 and Ogg Theora, and the difference that open licensing makes with regard to permission to use the code.

Side note: all content from this course is available under an open license for anyone to re-purpose for any use.

“Generative” – a word that is used widely in academia, meaning that through your action you allow others to do something as well.  It allows people other than the original creator of the work to change it and use it for things that the creator didn’t think of; it facilitates the multiplication of efforts and exploration.

“Innovation” is over-used in many circles…a black box in which things are improved but where the process is invisible.  The most important characteristic of innovation is that it represents change (both good and bad change).  Intentional disruption = standing up to make a difference in a way that’s going to be uncomfortable…and people are often reluctant to change because it’s uncomfortable.  Setting things up to purposefully be uncomfortable and going up against various interests (possibly commercial or political) who would not benefit from that change.  Setting yourself up against the status quo.  In an open model where you’re trying to encourage change / innovation / disruption, you’re going to run up against issues.

Where does experimentation come from?  If we assume that progress and innovation stem from experimentation and failure (learning from our mistakes), it’s important to understand this process as it leads to change.  The core group of contributors to large projects are not necessarily the ones doing the experimenting; it usually comes from the periphery.  How do you set yourself up to have “edges” in the community and be open in order to promote experimentation and innovation?  This disruption is difficult for business to commit to because it’s hard to determine the future value of experimentation and innovation.

As messy and painful as it is, the open web has worked well.  Few inventions before the web (the printing press and telephone, perhaps) have disrupted communication so comprehensively.  An instantaneous communication network that people are continually changing and re-purposing, without having to ask permission from anyone, is very important.  The nature of the web made this possible, i.e. it was intentionally built on a model of open technology / software where anyone could make changes without permission.

What makes something open web technology?  The web browser is the gateway to the web and we spend a lot of time using it, therefore it should be comfortable and easy to use.  Can you see the page source to understand how it works?  Being able to look at somebody’s source is part of the transparency / openness of the web.  Source is delivered (HTML, JavaScript) and compiled / executed locally.  This was something of a historical accident: originally authors were writing simple documents where the source didn’t matter as much.  Now it presents a learning opportunity, where others can see what you’ve done and use it in other ways.  This doesn’t mean that you should copy and paste everything; rather, figure out how it works and learn that way.

If you have access to the source you may be able to figure out the API (or the API is open), which means that you can then re-purpose the application.  Twitter is an example…even though it’s only a simple application (status updates), others have figured out how to use it in different, more complex ways because of its open API, and a whole ecosystem has developed around it.

Another example is how people have changed Google search by implementing code in the browser, even though Google hasn’t explicitly given that permission.  This is an example of people using the openness of the web to figure things out and make changes that have not explicitly been allowed by an open license.

Key pieces of open web technology:

  • HTML = core of open web, describes document structure, content, continually improving and evolving
  • XML = more generalised data management (not as widely used), semantic meaning is important in the open web
  • CSS = controls presentation of content (unlike HTML), can imply visual structure, media context, also implies semantic meaning
  • Images = static visual medium that conveys expression (jpg, png are simple but allows everyone to use), adds context to the open web
  • JavaScript = integration of all the other pieces, makes the static web dynamic
  • Open video = transparent, generative, not closed implementation of web video (in contrast to Flash), using ogg theora (patent- and royalty-free video codec)
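To make the list above concrete, here is a minimal, hypothetical sketch of how these pieces fit together in a single page: HTML carries the structure and content, CSS the presentation, and JavaScript the behaviour, all delivered as viewable source that anyone can inspect and learn from.

```html
<!DOCTYPE html>
<html>
<head>
  <!-- CSS: presentation only; the content below stays untouched -->
  <style>
    article h1 { font-size: 1.4em; color: #333; }
    article p  { line-height: 1.5; }
  </style>
</head>
<body>
  <!-- HTML: document structure and content -->
  <article>
    <h1>An open web page</h1>
    <p>Anyone can view this source to see how it works.</p>
  </article>

  <!-- JavaScript: ties the static pieces together, making the page dynamic -->
  <script>
    document.querySelector('article h1')
            .addEventListener('click', function () {
      this.textContent = 'You just repurposed this page';
    });
  </script>
</body>
</html>
```

Because each layer is separate, someone else could restyle this page with their own CSS, or attach different behaviour with their own script, without ever touching (or asking permission to change) the content itself.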

Lyx: separating content and style through document processing

It’s been a while since I posted anything here, mainly because I haven’t read anything interesting in that time, which is mainly because we’ve spent the past month or so gearing up for undergraduate exams.  Now that exams are effectively over, we’re marking…sigh.  Alongside the exams, our department is running a writing workshop in the hope that by the end of the year we’ll each have a peer-reviewed article ready for publication.  While this is a great way to bite the bullet and get something out, it does take away time from the more interesting task of finding and blogging about cool stuff.

So it’s the weekend, I have a huge pile of scripts to mark and an article to complete for review on Monday…and here I am, working on this post.  But it’s work-related, so I don’t feel bad.  The reason it’s work-related is because I’ve recently started using a document processor for writing articles, called LyX.  A document processor differs from a word processor (like OpenOffice) in that it attempts to separate the process of writing from the process of typesetting, or formatting.

This separation of content and style is hardly a new concept, but it has become increasingly evident in the whole Web 2.0 hype, which makes use of the idea that content wrapped in meaningful XML tags can be syndicated in almost any form and presented in almost any format.  In the early days of the web, it was also being addressed in the argument against HTML tags that described the formatting of content rather than its structure.  CSS is what allowed that separation to take place, but not to the degree that XML does.  While this isn’t really the place for that discussion, I just wanted to highlight the point that the separation of content and formatting has been an issue since we started using computers to write documents (here’s a great video by Michael Wesch that demonstrates this idea really well).

The earliest word processors gave everyone the power to format content, which, it could be argued, is a good thing because choice is important, right?  While the ability to decide text colour, font size, page margins and the thousand other options present in a word processor may be great for that letter to your mom, it’s almost meaningless when it comes to academic writing, the formatting of which is already determined by either your institution or publisher.  So when I write, why should I have to bother with formatting?

This is where LyX comes in.  By separating the writing process from the typesetting process, LyX lets the writer concentrate on writing, rather than mucking about trying to figure out how to insert and keep track of in-text citations and all the other soul-destroying aspects of computer-based academic writing.  It also allows you to output your document in any of the major formats you require.  For example, my institution uses the APA style of document formatting, so when I’m done writing, I literally press a button that outputs my work as a PDF document, already formatted for publication.
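Under the hood, LyX stores the document as structured markup and exports LaTeX, so restyling a whole document is a matter of changing the preamble, not the text.  As a rough sketch (the `apa` document class and `apacite` package shown here are one way to get APA formatting, and are assumptions rather than necessarily what my own setup uses):

```latex
% Content is written once, free of formatting decisions.
\documentclass{apa}   % swap this single line for another class to restyle everything
\usepackage{apacite}  % APA-style citations, tracked automatically

\begin{document}
\title{My article}
\author{An Author}
\maketitle

As I write, I only mark \emph{structure}: sections, emphasis,
and citations \cite{boyd2012}. The class file decides how
everything looks on the page.

\bibliographystyle{apacite}
\bibliography{refs}
\end{document}
```

The point is that nothing in the body describes appearance; the same source can be typeset for a thesis, a journal, or the web just by exporting against a different style.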

This post has gotten incredibly long, so I’ll end with a few links to more information if you’re interested in checking it out.  A word of warning though: if you’re not used to the idea that content and style are fundamentally different, there’s a steep learning curve when switching to something like LyX.