UCT seminar: Shaping our algorithms

Tomorrow I’ll be presenting a short seminar at the University of Cape Town on a book chapter that was published earlier this year, called Shaping our algorithms before they shape us. Here are the slides I’ll be using, which I think are a useful summary of the chapter itself.


Comment: Nvidia AI Turns Doodles Into Realistic Landscapes

Nvidia has shown that AI can use a simple representation of a landscape to render a photorealistic vista that doesn’t exist anywhere in the real world… It has just three tools: a paint bucket, a pen, and a paintbrush. After selecting your tool, you click on a material type at the bottom of the screen. Material types include things like tree, river, hill, mountain, rock, and sky. The organization of materials in the sketch tells the software what each part of the doodle is supposed to represent, and it generates a realistic version of it in real time.

Whitwam, R. (2019). Nvidia AI Turns Doodles Into Realistic Landscapes. Extreme Tech.

You may be tempted to think of this as substitution, where the algorithm looks at the shape you draw, notes the “material” it represents (e.g. a mountain) and then matches it to an existing image of that thing. But that’s not what’s happening here. The AI is creating a completely new version of what you’ve specified, based on what it knows that thing to look like.

So when you say that this shape is a mountain, it has a general concept of “mountain”, which it uses to create something new. If it were a simple substitution, the algorithm would need you to draw a shape that corresponds to an existing feature of the world. I suppose you could argue that this isn’t real creativity, but I think you’d be hard-pressed to say that it’s not moving in that direction. The problem (IMO) with every argument saying that AI is not creative is that these things only ever get better. It may not conform to the definition of creativity that you’re using today, but tomorrow it will.

The evolution of Atlas from Boston Dynamics

This overview of the changes in capabilities of the Atlas humanoid robot from Boston Dynamics is both fascinating and a bit unsettling. In 5 years Atlas has gone from struggling to stand on one leg, to walking on uneven surfaces, to running on uneven surfaces, to doing backflips and now, in October 2018, to bounding up a staggered series of wooden platforms. It’s worth noting that very few human beings would be able to accomplish this last feat.

According to Boston Dynamics, Atlas’ software uses all parts of the body to generate the necessary force to propel the robot up the platforms. The most impressive part of the last demo is the fact that “...Atlas uses computer vision and visible markers on the platforms to decide when and how to shift its weight. So, it’s not just executing a program, it’s making it up as it goes along.” In other words, Atlas is making real-time decisions about how to move, based on what it sees in front of it. No-one has told it what to do.

The profound implication of this is that these things are only ever going to get better, and the rate of change is going to increase. Now that they’ve solved “balance”, “walking”, “running”, and “jumping”, what will Boston Dynamics turn to next? Once Atlas has achieved parity with human performance it’s only a matter of time before it’s superhuman in every physical ability we care about.

‘The discourse is unhinged’: how the media gets AI alarmingly wrong

Zachary Lipton, an assistant professor at the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from “interesting-ish research” to “sensationalized crap”. According to Lipton, in recent years broader interest in topics like “machine learning” and “deep learning” has led to a deluge of this type of opportunistic journalism, which misrepresents research for the purpose of generating retweets and clicks – he calls it the “AI misinformation epidemic”.

Source: Schwartz, O. (2018). ‘The discourse is unhinged’: how the media gets AI alarmingly wrong.

There’s a lot of confusion around what we think of as AI. For most people actually working in the field, current AI and machine learning research presents its findings as solutions to very narrowly constrained problems, arrived at through the statistical manipulation of large data sets and expressed within certain confidence intervals. There’s no talk of consciousness, choice, or values of any kind. To be clear, this is “intelligence” as defined within very specific parameters. It’s important that clinicians and educators (and everyone else, actually) understand, at least at a basic level, what we mean when we say “artificial intelligence”.

Of course, there are also people working on issues of artificial general intelligence and superintelligence, which are different from the narrow (or weak) intelligence being reported in today’s sensationalist headlines.

Robots in the classroom? Preparing for the automation of teaching | BERA

Agendas around AI and education have been dominated by technology designers and vendors, business interests and corporate reformers. There is a clear need for vigorous responses from educators, students, parents and other groups with a stake in public education. What do we all want from our education systems as AI-driven automation becomes more prominent across society?

Source: Robots in the classroom? Preparing for the automation of teaching | BERA

We need teachers, clinicians, and clinician educators involved in the process of designing, developing, implementing and evaluating AI-based systems in the higher education and clinical context. As long as the agenda for 21st century education and clinical care is driven by corporate interests (and how could it not be, given the enormous commercial value of AI), it’s likely that those responsible for teaching the next generation of health professionals will be passive recipients of algorithmic decision-making rather than empowered participants in their design.

An introduction to artificial intelligence in clinical practice and education

Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each slide. This presentation is based on a paper I submitted to OpenPhysio, called: “Artificial intelligence in clinical practice: Implications for physiotherapy education“.


The graph shows how traffic to a variety of news websites changed after Facebook made a change to their Newsfeed algorithm, highlighting the influence that algorithms have on the information presented to us, and how we no longer make real choices about what to read. When algorithms are responsible for filtering what we see, they have power over what we learn about the world.


The graph shows the near-flat line of social development and population growth until the invention of the steam engine. Before that, all of the Big Ideas we came up with had relatively little impact on our physical well-being. If your grandfather spent his life pushing a plough, there was an excellent chance that you’d spend your life pushing one too. But once we figured out how to augment our physical abilities with machines, we saw significant advances in society and industry and an associated improvement in everyone’s quality of life.


The emergence of artificial intelligence in the form of narrowly constrained machine learning algorithms has demonstrated the potential for important advances in cognitive augmentation. Basically, we are starting to really figure out how to use computers to enhance our intelligence. However, we must remember that we’ve been augmenting our cognitive ability for a long time, from exporting our memories onto external devices, to performing advanced computation beyond the capacity of our brains.


The enthusiasm with which modern AI is being embraced is not new. The research and engineering aspects of artificial intelligence have been around since the 1950s, while fictional AI has an even longer history. The field has been through a series of highs and lows (the lows are known as AI Winters). The developments during the earlier cycles were fueled by what has become known as Good Old Fashioned AI: early attempts to explicitly design decision-making into algorithms by hard-coding all possible variations of the interactions in a closed environment. Understandably, these systems were brittle and unable to adapt to even small changes in context. This is one of the reasons that previous iterations of AI had little impact in the real world.


There are three main reasons why it’s different this time. The first is the emergence of cheap but powerful hardware (mainly central and graphics processing units), which has seen computational power growing by a factor of 10 every four years. The second is the exponential growth of data; massive data sets are an important reason that modern AI approaches have been so successful. The graph in the middle column shows data growth in zettabytes (10 to the power of 21 bytes). At this rate of data growth we’ll run out of metric prefixes in a few years (Yotta is the only allocation after Zetta). The third is the emergence of vastly improved machine learning algorithms that are able to learn without being explicitly told what to learn. In the example here, the algorithm has coloured in the line drawings to create a pretty good photorealistic image, but without being taught any of the underlying concepts, i.e. human, face, colour, drawing, photo, etc.


We’re increasingly seeing evidence that in some very narrow domains of practice (e.g. reasoning and information recall), machine learning algorithms can outdiagnose experienced clinicians. It turns out that computers are really good at classifying patterns of variables that are present in very large datasets. And diagnosis is just a classification problem. For example, algorithms are very easily able to find sets of related signs and symptoms and put them into a box that we call “TB”. And increasingly, they are able to do this classification better than the best of us.
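To make the “diagnosis is just a classification problem” framing concrete, here’s a minimal sketch in Python using scikit-learn. Everything in it is a toy stand-in: the four binary “symptoms” and the rule that generates the label are invented for illustration, not a clinical model.

```python
# A minimal sketch of "diagnosis as classification" on synthetic data.
# The four binary features and the labelling rule are illustrative
# stand-ins, not a clinical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1000 synthetic "patients", each a vector of binary signs and symptoms
# (e.g. cough, fever, night sweats, weight loss).
X = rng.integers(0, 2, size=(1000, 4))
y = (X.sum(axis=1) >= 3).astype(int)  # toy rule standing in for the true diagnosis

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # learn the pattern that maps symptoms to a label

print("accuracy on unseen patients:", clf.score(X_test, y_test))
```

The point is not the particular model but the shape of the problem: given enough labelled examples, finding the boundary between “TB” and “not TB” is exactly the kind of pattern-matching these algorithms are built for.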


It is estimated that up to 60% of a doctor’s time is spent capturing information in the medical record. Natural language processing algorithms can “listen” to the ambient conversation between a doctor and patient, record the audio and transcribe it (translating it in the process if necessary). They then perform semantic analysis of the text (not just keyword analysis) to extract meaningful information, which can be used to populate an electronic health record. While the technology is at a very early stage and not yet safe for real-world application, it’s important to remember that this is the worst it’s ever going to be. Even if we reach some kind of technological dead end with respect to machine learning and from now on only increase efficiency, we are still looking at a transformational technology.
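As a rough sketch of that pipeline (audio in, draft record entry out), assuming a placeholder transcribe() function for the speech-to-text step, spaCy’s general-purpose entity recogniser as a crude stand-in for the semantic layer, and invented record fields:

```python
# A rough sketch of the pipeline described above: audio -> transcript ->
# semantic extraction -> draft record entry. transcribe() is a stub, the
# record fields are hypothetical, and spaCy stands in for the semantic layer.
import spacy

# Assumes the small general-purpose English model has been downloaded.
nlp = spacy.load("en_core_web_sm")

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text service listening to the consultation."""
    return "Patient reports chest pain for three days and is taking aspirin daily."

def draft_record_entry(audio_path: str) -> dict:
    transcript = transcribe(audio_path)
    doc = nlp(transcript)
    return {
        "transcript": transcript,
        # Named entities (durations, drugs, etc. recognised by the model)
        # as a crude stand-in for structured clinical concepts.
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
    }

print(draft_record_entry("consultation.wav"))
```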


An algorithm recently passed the Chinese national medical exam, qualifying (in theory) as a physician. While we can argue that practising as a physician is more than writing a text-based exam, it’s hard not to acknowledge the fact that – at the very least – machines are becoming more capable in the domains of knowledge and reasoning that characterise much of clinical practice. Again, this is the worst that this technology is ever going to be.


This graph shows the number of AI applications under development in a variety of disciplines, including medicine (on the far right). The green segment shows the number of applications where AI is outperforming human beings, orange segments show applications that are performing relatively well, and blue highlights areas that need work. There are two other points worth noting: medical AI is the area of research showing the most significant advances (maybe because it’s the area where companies can make the most money); and all the way at the far left of the graph is education, suggesting that it may be some time before algorithms show the same progress in teaching.


Contrary to what we see in the mainstream media, AI is not a monolithic field of research; it consists of a wide variety of different technologies and philosophies that are each sometimes referred to under the more general heading of “AI”. While much of the current progress is driven by machine learning algorithms (which is itself driven by the three characteristics of modern society highlighted earlier), there are many areas of development, each of which can potentially contribute to different areas of clinical practice. For the purposes of this presentation, we can define AI as any process that is able to independently achieve an objective within a narrowly constrained domain of interest (although the constraints are becoming looser by the day).


Machine learning is a sub-domain of AI research that works by exposing an algorithm to a massive data set and asking it to look for patterns. By comparing what it finds to human-tagged patterns in the data, developers can fine-tune the algorithm (i.e. “teach” it) before exposing it to untagged data and seeing how well it performs relative to the training set. This broadly describes the “learning” process of machine learning. Deep learning is a sub-domain of machine learning that works by passing data through many layers, allocating different weights to the data at each layer, thereby coming up with a statistical “answer” that expresses an outcome in terms of probability. Deep learning neural networks underlie many of the advances in modern AI research.
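Here’s a compact sketch of that train-then-evaluate cycle, using a small multi-layer network from scikit-learn on a toy dataset; the layer sizes and dataset are arbitrary choices for illustration, not a recommendation.

```python
# A compact sketch of the cycle described above: fit on human-labelled
# examples, then check performance on examples the model has never seen.
# The dataset is a toy stand-in.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier  # a small "deep" network: stacked weighted layers

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# The human-tagged portion used for "teaching", plus a held-out portion
# standing in for the untagged data the model is later exposed to.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # the weights in each layer are adjusted to fit the patterns

# The output is expressed as a probability, as described above.
print("held-out accuracy:", model.score(X_test, y_test))
print("probability for the first unseen case:", model.predict_proba(X_test[:1]))
```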


Because machine and deep learning algorithms are trained on (biased) human-generated datasets, it’s easy to see how the algorithms themselves will have an inherent bias embedded in the outputs. The Twitter screenshot shows one of the least offensive tweets from Tay, an AI-enabled chatbot created by Microsoft, which learned from human interactions on Twitter. In the space of a few hours, Tay became a racist, sexist, homophobic monster – because this is what it learned from how we behave on Twitter. This is more of an indictment of human beings than it is of the algorithm. The other concern with neural networks is that, because of the complexity of the algorithms and the number of variables being processed, human beings are unable to comprehend how the output was computed. This has important implications when algorithms are helping with clinical decision-making and is the reason that resources are being allocated to the development of what is known as “explainable AI”.
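For a sense of what the simplest “explainable AI” tooling looks like, here’s a sketch using permutation importance from scikit-learn: it doesn’t open the black box, but it estimates how much each input feature contributed to the model’s performance by shuffling that feature and measuring the drop. The data and model are toy stand-ins.

```python
# A minimal illustration of one common explainability technique:
# permutation importance, which asks how much performance drops when
# each input feature is shuffled. Data and model are toy stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```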


As a result of the changes emerging from AI-based technologies in clinical practice, we will soon need to stop thinking of our roles in terms of “professions” and rather in terms of “tasks”. This matters because, increasingly, many of the tasks we associate with our professional roles will be automated. This is not all bad news though, because it seems probable that increased automation of the repetitive tasks in our repertoire will free us up to take on more meaningful tasks, for example, having more time to interact with patients. We need to start asking which tasks computers are better at, and start allocating those tasks to them. Of course, we will need to define what we mean by “better”: more efficient, more cost-effective, faster, etc.


Another important change that will require the use of AI-based technologies in clinical practice will be the inability of clinicians to manage – let alone understand – the vast amount of information being generated by, and from, patients. Not only are all institutional tests and scans digital but increasingly, patients are creating their own data via wearables – and soon, ingestibles – all of which will require that clinicians are able to collect, filter, analyse and interpret these vast streams of information. There is evidence that, without the help of AI-based systems, clinicians simply will not have the cognitive capacity to understand their patients’ data.


The impact of more patient-generated health data is that we will see patients being in control of their data, which will exist on a variety of platforms (cloud storage, personal devices, etc.), none of which will be available to the clinician by default. This means that power will move to the patient as they make choices about who to allow access to their data in order to help them understand it. Clinicians will need to come to terms with the fact that they will no longer wield the power in the relationship and in fact, may need to work within newly constituted care teams that include data scientists, software engineers, UI designers and smart machines. And all of these interactions will be managed by the patient who will likely be making choices with inputs from algorithms.


The incentives for enthusiastic claims around developments in AI-based clinical systems are significant; this is an academic land grab the likes of which we have only rarely experienced. The scale of the funding involved puts pressure on researchers to exaggerate claims in order to be the first to every important milestone. This means that clinicians will need to become conversant with the research methods and philosophies of the data scientists who are publishing the most cutting-edge research in the medical field. The time will soon come when it will be difficult to understand the language of healthcare without first understanding the language of computer science.


The implications for health professions educators are profound, as we will need to start asking ourselves what we are preparing our graduates for. When clinical practice is enacted in an intelligent environment and clinicians are only one of many nodes in vast information networks, what knowledge and skills do they need to thrive? When machines outperform human beings in knowledge and reasoning tasks, what is the value of teaching students about disease progression, for example? We may find ourselves graduating clinicians who are well-trained, competent and irrelevant. It is not unreasonable to think that the profession called “doctor” will not exist in 25 years’ time, having been superseded by a collective of algorithms and third-party service providers who provide more fine-grained services at a lower cost.


There are three new literacies that health professions educators will need to begin integrating into our undergraduate curricula. Data literacy, so that healthcare graduates will understand how to manage, filter, analyse and interpret massive sets of information in real-time; Technological literacy, as more and more of healthcare is enacted in digital spaces and mediated by digital devices and systems; and Human literacy, so that we can become better at developing the skillsets necessary to interact more meaningfully with patients.


There is evidence to suggest that, while AI-based systems outperform human beings on many of the knowledge and reasoning tasks that make up clinical practice, the combination of AI and human originality results in the most improved outcomes of all. In other words, we may find that patient outcomes are best when we figure out how to combine human creativity and emotional response with machine-based computation.


And just when we’re thinking that “creativity” and “originality” are the sole province of human beings, we’re reminded that AI-based systems are making progress in those areas as well. It may be that the only way to remain relevant in a constantly changing world is to develop our ability to keep learning.

OpenPhysio abstract: Artificial intelligence in clinical practice – Implications for physiotherapy education

Here is the abstract of a paper I recently submitted to OpenPhysio, a new open-access journal with an emphasis on physiotherapy education.

About 200 years ago the invention of the steam engine ushered in an era of unprecedented development and growth in human social and economic systems, whereby human labour was supplanted by machines. The recent emergence of artificially intelligent machines has seen human cognitive capacity augmented by computational agents that are able to recognise previously hidden patterns within massive data sets. The characteristics of this second machine age are already influencing all aspects of society, creating the conditions for disruption to our social, economic, education, health, legal and moral systems, and will likely have a far greater impact on human progress than the steam engine did. As AI-based technology becomes increasingly embedded within devices, people and systems, the fundamental nature of clinical practice will evolve, resulting in a healthcare system requiring profound changes to physiotherapy education. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and the newly constituted care teams that will emerge. This paper describes some of the possible influences of AI-based technologies on physiotherapy practice, and the subsequent ways in which physiotherapy education will need to change in order to graduate professionals who are fit for practice in a 21st century health system.

Read the full paper at OpenPhysio (note that this article is still under review).

altPhysio | Technology as infrastructure

This is the fourth post in my altPhysio series, where I’m exploring alternative ways of thinking about a physiotherapy curriculum by imagining what a future school might look like. This post is a bit longer than the others because this is an area I’m really interested in and spend a lot of time thinking about. I’ve also added more links to external sources because some of this stuff sounds like science fiction. The irony is that everything in this post describes technology that currently exists, and as long as we’re still debating whether or not to share PowerPoint slides, we’re not paying attention to what’s important. This post was a ton of fun to write.

Q: Can you talk a little bit about the history of technology integration in health professions education? Maybe over the last decade or so.

In the early part of the 21st century we saw more institutions starting to take the integration of technology seriously. Unfortunately the primary use of digital services at the time was about moving content around more efficiently. Even though the research was saying that the content component was less important for learning than the communication component, we still saw universities using the LMS primarily to share notes and presentations with students.

The other thing is that we were always about 5-10 years behind the curve when it came to the adoption of technology. For example, wikis started showing up in the medical education literature almost 10 years after they were invented. The same with MOOCs. I understand the need to wait and see how technologies stabilise and then choose something that’s robust and reliable. But the challenge is that you lose out on the advantages of being an early mover. That’s why we tend to adopt a startup mentality to how we use technology at altPhysio.

Q: What do you mean by that? How is altPhysio like a startup?

We pay attention to what’s on the horizon, especially the emerging technologies that have the potential to make an impact on learning in 1, 2 and 5 year time frames. We decided that we weren’t going to wait and see what technologies stabilised and would rather integrate the most advanced technologies available at the time. We designed our programme to be flexible and to adapt to change based on what’s happening around us. When the future is unknowable because technological advances are happening faster than you can anticipate, you need a system that can adapt to the situations that emerge. We can’t design a rigid curriculum that attempts to guess what the future holds. So we implement and evaluate rapidly, constantly trying out small experiments with small groups of students.

Once we decided that we’d be proactive instead of reactive in how we use and think about technology, we realised that we’d need a small team in the school who are on the lookout for technologies that have the potential to enhance the curriculum. The team consists of students and staff who identify emerging technologies before they become mainstream, prepare short reports for the rest of the school, recruit beta testers and plan small scale research projects that highlight the potential benefits and challenges of implementing the technology at scale.

We’ve found that this is a great way for students to invest themselves in their own learning, drive research in areas they are interested in, take leadership roles and manage small projects. Staff on the team act as supervisors and mentors, but in fact are often students themselves, as both groups push each other further in terms of developing insights that would not be possible working in isolation.

Q: But why the emphasis on technology in health professions education? Isn’t this programme about developing physiotherapists?

The WHO report on the use of elearning for undergraduate health professional education called for the integration of technology into the curriculum, as did the Lancet Commission report. And it wasn’t just about moving content more efficiently in the system but rather to use technology intentionally to change how we think about the curriculum and student learning. The ability to learn is increasingly mediated by digital and information literacy and we want our students’ learning potential to be maximised.

A low level of digital literacy in the 21st century is akin to a limited ability to read and write in the past. Imagine trying to learn in the 20th century without being able to read and write. Well, that’s what it’s like trying to learn today if you don’t have a grasp of how digital technologies mediate your construction of knowledge. Integrating technology is not about adding new gadgets or figuring out how to use Facebook groups more effectively.

Technology is an infrastructure that can be used to open up and enhance students’ learning, or to limit it. Freire said that there’s no such thing as a neutral education process, and we take seriously the fact that the technologies we use have a powerful influence on students’ learning.

Q: How do you develop digital and information literacy alongside the competencies that are important for physiotherapists? Doesn’t an emphasis on technology distract students from the core curriculum?

We don’t offer “Technology” as something separate to the physiotherapy curriculum, just as you don’t offer “Pen and paper” as something that is separate. The ability to use a pen and paper used to be an integral and inseparable aspect of learning, and we’ve just moved that paradigm to now include digital and information literacy. Technology isn’t separate to learning, it’s a part of learning just like pen and paper used to be.

Digital and information literacy is integrated into everything that happens at the school. For example, when a new student registers they immediately get allocated a domain on the school servers, along with a personal URL. A digital domain of their own where they get to build out their personal learning environment. This is where they make notes, pull in additional resources like books and video, and work on their projects. It’s a complete online workspace that allows individual and collaborative work and serves as a record of their progress through the programme. It’s really important to us that students learn how to control the digital spaces that they use for learning, and that they’re able to keep control over those spaces after they graduate.

When students graduate, their personal curriculum goes with them, containing the entire curriculum (every resource we shared with them) as well as every artefact of their learning they created, and every resource that they pulled in themselves. Our students never lose the content that they aggregated over the duration of the programme, but more importantly, they never lose the network they built over that time. The learning network is by far the most important part of the programme, and includes not only the content relationships they’ve formed during the process but includes all interactions with their teachers, supervisors, clinicians and tutors.

Q: Why is it important for students to work in digital space, as well as physical space? And how do your choices about online spaces impact on students’ learning?

Think about how the configuration of physical space in a 20th century classroom dictated the nature of interactions that were possible in that space. How did the walls, desks and chairs, and the position of the lecturer determine who spoke, for example? Who moved? Who was allowed to move? How was work done in that space? Think about how concepts of “front” and “back” (in a classroom) have connotations for how we think about who sits where.

Now, how does the configuration of digital space change the nature of the interactions that are possible in that space? How we design the learning environment (digital or physical) not only enables or disables certain kinds of interactions, but it says something about how we think about learning. Choosing one kind of configuration over another articulates a set of values. For example, we value openness in the curriculum, from the licensing of our course materials, to the software we build on. This commitment to openness says something about who we are and what is important to us.

The fact that our students begin here with their own digital space – a personal learning environment – that they can configure in meaningful ways to enhance their potential for learning, sends a powerful message. Just like the physical classroom configuration changes how power is manifested, so can the digital space. Our use of technology tells students that they have power in terms of making choices with respect to their learning.

To go back to your question about the potential for technology to distract students from learning physiotherapy; did you ever think about how classrooms – the physical configuration of space – distracted students from learning? Probably not. Why not?

Q: You mentioned that openness is an important concept in the curriculum. Can you go into a bit more detail about that?

Maybe it would be best to use a specific example because there are many ways that openness can be defined. Our curriculum is an open source project that gives us the ability to be as flexible and adaptable as a 21st century curriculum needs to be. It would be impossible for us to design a curriculum that was configured for every student’s unique learning needs and that was responsive to a changing social context, so we started with a baseline structure that could be modified over time by students.

We use a GitHub repository to host and collaborate on the curriculum. Think of a unique instance of the curriculum that is the baseline version – the core – that is hosted on our servers. When a student registers, we fork that curriculum to create another, unique instance on the student’s personal digital domain. At that moment, the curriculum on the student’s server is an exact copy of the one we have, but almost immediately the student’s version is modified based on their personal context. For example, the entire curriculum – including all of the content associated with the programme – is translated into the student’s home language if they so choose. Now that it’s on their server, they can modify it to better suit them, using annotation and editing tools, and they can integrate external resources into their learning environment.

One of the most powerful features of the system is that it allows students to push ideas back into our core curriculum. They make changes on their own versions and, if they’d like to see a change implemented across the programme, they send us a pull request, which is basically a message that shows the suggested change along with a comment explaining why the student wants it. It’s a feedback mechanism for them to send us signals on what works well and what can be improved, and it enables us to constantly refine and improve the curriculum based on real-time input from students.
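To make the mechanism concrete, here’s a hypothetical sketch of what submitting such a pull request could look like programmatically via the GitHub REST API; the repository, branch names, token and comment are all invented for illustration.

```python
# A hypothetical sketch of the "pull request as feedback" idea, using the
# GitHub REST API. The repository, branch names and token are invented.
import requests

API = "https://api.github.com/repos/altphysio/curriculum/pulls"  # hypothetical repo
TOKEN = "ghp_example"                                            # placeholder token

payload = {
    "title": "Add isiXhosa translation of the shoulder assessment case",
    "head": "student-42:shoulder-case-isixhosa",  # the student's fork and branch
    "base": "main",                               # the school's core curriculum
    "body": "Why: local clinical placements need this case in isiXhosa.",
}

response = requests.post(
    API,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/vnd.github+json"},
)
print(response.status_code, response.json().get("html_url"))
```

In practice a student would more likely click a button in their editing interface, but the underlying signal – a proposed change plus the reasoning behind it – is the same.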

On top of this, every time we partner with other institutions, they can fork the curriculum and modify it to suit their context, and then push the changes back upstream. This means that the next time someone wants to partner with us, the core curriculum they can choose from is bigger and more comprehensive. For example, our curriculum is now the largest database of case studies in the world because most institutions that fork the curriculum and make their own changes also send those changes back to the core.

Q: You have a very different approach to a tutorial system. Tell us about how tutors are implemented in your school.

The tutors at altPhysio are weak AI agents – relatively simple algorithms that perform within very narrow constraints linked to basic tasks associated with student learning. Students “connect” with their AI tutors in the first week of the programme, which for the most part involves downloading an app onto their phones. This is then synced across all of their other devices and digital spaces, including laptops, wearables and cloud services, so that the AI is “present” in whatever context the student is learning.

As AI has become increasingly commoditised in the last decade, AI as a service has allowed us to take advantage of features that enhance learning. For example, a student’s tutor will help her with establishing a learning context, finding content related to that context, and reasoning through the problems that arise in the context. In addition, the AIs help students manage time on task, remind them about upcoming tasks and the associated preparation for those tasks, and generally keep them focused on their learning.

Over time the algorithms evolve with students, becoming increasingly tied to them and their own personal learning patterns. While all AI tutors begin with the same structure and function, they gradually become more tightly integrated with the student. Some of the more adventurous students have had their AIs integrated with neural lace implants, which has significantly accelerated their ability to function at much higher levels and at much greater speeds than the rest of us. These progressions have obviously made us think very differently about assessment.

Q: What about technology used during lectures? Is there anything different to what you’ve already mentioned?

Lectures have a different meaning here than at other institutions, and I suspect we’ll talk about that later. Anyway, during lectures the AI tutors act as interpreters for the students, performing real-time translation for our international speakers, as well as doing speech-to-text transcription in real time. This means that our deaf students get all speech converted to text in real time, which is pretty cool. All the audio, video and text generated during lectures is saved, edited and synced to the students’ personal domains, where it’s available for recall later.

Our students use augmented reality a lot in the classroom and clinical context, overlaying digital information on their visual fields in order to get more context in the lecture. For example, while I’m talking about movement happening at the elbow, the student might choose to display the relevant bones, joints and muscles responsible for the movement. As the information is presented to them, they can choose to save that additional detail at the point in the lecture where I discussed it, so that when they’re watching the video of the lecture later, the additional information is included. We use this system a lot for anatomy and other movement- and structure-type classes.


Q: That sounds like a pretty comprehensive overview of how technology has some important uses beyond making content easier to access. Any final thoughts?

Technology is not something that we “do”, it’s something that we “do things with”. It enables more powerful forms of communication and interaction, in both online and physical spaces, and to think of it in terms of another “platform” or “service” is to miss the point. It amplifies our ability to do things in the world, and just because it’s not cheap or widely distributed today doesn’t mean it won’t be in the future.

In 2007 the iPhone didn’t exist. Now every student in the university carries in their pocket a computer more powerful than the ones we used to put men on the moon. We should be more intentional about how we use that power, and forget about whatever app happens to be trending today.

 

I enjoyed reading (March)


The web as a universal standard (Tony Bates): It wasn’t so much the content of this post that triggered my thinking, but the title. I’ve been wondering for a while what a “future-proof” knowledge management database would look like. While I think the most powerful ones will be semantic (e.g. like the KDE desktop integrated with the semantic web), there will also be a place for standardised, text-based media like HTML.

 

The half-life of facts (Maria Popova):

Facts are how we organize and interpret our surroundings. No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives. In fact, we even have a phrase for the state of affairs that occurs when we fail to do this: cognitive dissonance.

 

How parents normalised password sharing (danah boyd):

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.

 

Academic research published as a graphic novel (Gareth Morris): Over the past few months I’ve been thinking about different ways for me to share the results of my PhD (other than the papers and conference presentations that were part of the process). I love the idea of using stories to share ideas, but had never thought about presenting research in the form of a graphic novel.


 

Getting rich off of schoolchildren (David Sirota):

You know how it goes: The pervasive media mythology tells us that the fight over the schoolhouse is supposedly a battle between greedy self-interested teachers who don’t care about children and benevolent billionaire “reformers” whose political activism is solely focused on the welfare of kids. Epitomizing the media narrative, the Wall Street Journal casts the latter in sanitized terms, reimagining the billionaires as philanthropic altruists “pushing for big changes they say will improve public schools.”

The first reason to scoff at this mythology should be obvious: It simply strains credulity to insist that pedagogues who get paid middling wages but nonetheless devote their lives to educating kids care less about those kids than do the Wall Street hedge funders and billionaire CEOs who finance the so-called reform movement. Indeed, to state that pervasive assumption out loud is to reveal how utterly idiotic it really is, and yet it is baked into almost all of today’s coverage of education politics.

 

The case for user agent extremism (Anil Dash): Anil’s post has some close parallels with the speech by Eben Moglen that I linked to last month: the idea that the more technology becomes integrated into our lives, the more control we lose. We all need to become invested in wresting control of our digital lives and identities back from corporations, although exactly how to do that is a difficult problem.

The idea captured in the phrase “user agent” is a powerful one, that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we’re in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more a user-driven agent of those corporations.

 

Singularities and nightmares (David Brin):

Options for a coming singularity include self-destruction of civilization, a positive singularity, a negative singularity (machines take over), and retreat into tradition. Our urgent goal: find (and avoid) failure modes, using anticipation (thought experiments) and resiliency — establishing robust systems that can deal with almost any problem as it arises.

 

Is AI near a takeoff point? (J. Storrs Hall):

Computers built by nanofactories may be millions of times more powerful than anything we have today, capable of creating world-changing AI in the coming decades. But to avoid a dystopia, the nature (and particularly intelligence) of government (a giant computer program — with guns) will have to change.