In Beta and sunsetting consumer Google+

Action 1: We are shutting down Google+ for consumers.

This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.

I don’t think it’s a surprise to anyone that Google+ wasn’t a big hit, although I am surprised that Google has taken the step to shut it down for consumers. And this is the problem with online communities in general: when the decision is made that they’re not cost-effective, they’re shut down regardless of the value they create for community members.

When Ben and I started In Beta last year we decided to use Google+ for our community announcements, and we’ve been pretty happy with what we’ve been able to achieve with it. The community has grown to almost 100 members and, while we don’t see much engagement or interaction, that’s not why we started using it. For us, it was a place to make announcements about planning for upcoming episodes; since we didn’t have a dedicated online space, it made sense to use something that already existed. Now that Google+ is being sunset, we’ll need to figure out another place to set up the community.

Mozilla’s Common Voice project

Any high-quality speech-to-text engine requires thousands of hours of voice data to train it, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question: how might we collect large quantities of voice data for Open Source machine learning?

Source: Branson, M. (2018). We’re intentionally designing open experiences, here’s why.

One of the big problems with the development of AI is that few organisations have the large, inclusive, diverse datasets necessary to reduce the inherent bias in algorithmic training. Mozilla’s Common Voice project is an attempt to create a large, multilingual dataset of human voices with which to train natural language AI.

This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences.

I think that voice and audio are probably going to be the next computer-user interface, so this is an important project to support if we want to make sure that Google, Facebook, Baidu and Tencent don’t have a monopoly on natural language processing. I see this project existing on the same continuum as OpenAI, which aims to ensure that “…AGI’s benefits are as widely and evenly distributed as possible.” Whatever you think about the possibility of AGI arriving anytime soon, I think it’s a good thing that people are working to ensure that the benefits of AI aren’t mediated by a few gatekeepers whose primary function is to increase shareholder value.

Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation. So we’ve launched Common Voice, a project to help make voice recognition open and accessible to everyone. Now you can donate your voice to help us build an open-source voice database that anyone can use to make innovative apps for devices and the web. Read a sentence to help machines learn how real people speak. Check the work of other contributors to improve the quality. It’s that simple!

The datasets are openly licensed and available for anyone to download and use, alongside other open language datasets that Mozilla links to on the page. This is an important project that everyone should consider contributing to. The interface is intuitive and makes it very easy either to submit your own voice or to validate the recordings that other people have made. Why not give it a go?
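For anyone who wants to work with the data rather than (or as well as) donate a voice, the downloads are essentially a directory of audio clips plus tab-separated transcript files. Here is a minimal Python sketch of reading one; it assumes an extracted release containing a validated.tsv file with path and sentence columns alongside a clips/ directory, so treat those names as assumptions to check against the version you actually download.

    # Minimal sketch: iterate over validated clips in an extracted
    # Common Voice release. The validated.tsv filename and the "path"
    # and "sentence" column names are assumptions to verify against
    # the release you download.
    import csv
    from pathlib import Path

    CV_ROOT = Path("cv-corpus/en")  # hypothetical extraction path

    def iter_validated_clips(root: Path):
        """Yield (audio_path, transcript) pairs from validated.tsv."""
        with open(root / "validated.tsv", newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f, delimiter="\t"):
                yield root / "clips" / row["path"], row["sentence"]

    for audio_path, sentence in iter_validated_clips(CV_ROOT):
        print(f"{audio_path.name}: {sentence}")
        break  # just peek at the first pair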

Technology Beyond the Tools

You didn’t need to know how to print on a printing press in order to read a printed book. Writing implements were readily available in various forms to record thoughts, as well as to communicate them. Their use was simple, requiring nothing more than penmanship. The rapid advancement of technology has changed this. Tech has evolved so quickly and so universally in our culture that there is now a literacy required for people to use it effectively and efficiently.

Reading and writing as a literacy was hard enough for many of us, and now we are seeing that there is a whole new literacy that needs to be not only learned, but taught by us as well.

Source: Whitby, T. (2018). Technology Beyond the Tools.

I wrote about the need to develop these new literacies in a recent article (under review) in OpenPhysio. From the article:

As clinicians become single nodes (and not even the most important nodes) within information networks, they will need data literacy to read, analyse, interpret and make use of vast data sets. As they find themselves having to work more collaboratively with AI-based systems, they will need the technological literacy to understand the vocabulary of computer science and engineering, and to communicate with machines. Failing that, we may find that clinicians become simply messengers and technicians carrying out the instructions provided by algorithms.

It really does seem like we’re moving towards a society in which the successful use of technology is, at least to some extent, premised on your understanding of how it works. As educators, it is incumbent on us to 1) know how the technology works so that we can 2) help students use it effectively while avoiding exploitation by for-profit companies.

See also: Aoun, J. (2017). Robot-Proof: Higher Education in the Age of Artificial Intelligence. MIT Press.

With every answer, search reshapes our worldview

Our search engines tried to impose structure and find relationships using mainly unintentional clues. You therefore couldn’t rely on them to find everything that would be of help, and not because the information space was too large. Rather, it was because the space was created by us slovenly humans.

Source: Weinberger, D. (2017). With every answer, search reshapes our worldview.

Interesting article on how search algorithms have changed as the web has grown in scale. In the beginning, we got results that were determined by precision and recall (although optimising for one meant reducing the importance of the other). Then relevance became necessary to include as the number of possible results became too large, i.e. when 100 000 articles match the topic, the search engine must decide how to rank them for you. Over time, interestingness was another concept built into the algorithm; it’s not just that the results should be accurate and relevant, they should be interesting too.
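To make the first two ideas concrete, here is a toy Python sketch (my own illustration, not anything from Weinberger’s article): precision is the fraction of returned documents that are relevant, recall is the fraction of relevant documents that were returned, and relevance ranking orders results by a score, here a naive term-overlap score over invented documents.

    # Toy illustration of precision, recall and naive relevance ranking.
    # The documents, query and relevance judgements are all invented.

    def precision_recall(returned, relevant):
        """Return (precision, recall) for sets of document ids."""
        hits = returned & relevant
        return len(hits) / len(returned), len(hits) / len(relevant)

    def relevance(query, doc):
        """Naive relevance: fraction of query terms appearing in the doc."""
        terms = set(query.lower().split())
        return len(terms & set(doc.lower().split())) / len(terms)

    docs = {
        1: "open source voice data for machine learning",
        2: "training speech recognition engines",
        3: "gardening tips for small spaces",
    }
    query = "open voice data"

    returned, relevant_ids = {1, 2}, {1}
    p, r = precision_recall(returned, relevant_ids)
    print(f"precision={p:.2f} recall={r:.2f}")

    # Rank all documents by the naive relevance score, best first.
    for doc_id in sorted(docs, key=lambda d: relevance(query, docs[d]), reverse=True):
        print(doc_id, relevance(query, docs[doc_id]))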

Currently, there’s interest in serendipity, where search engines return results that are slightly different from what you’re looking for (but not so different that you ignore them), which may serve to provide an alternative point of view and so help you avoid the filter bubble. As we move forward, we may also begin seeing calls for an increase in the truthfulness of results (which may reasonably be called quality). As I said, it’s an interesting article that covers a lot with respect to how search engines work, and it’s useful for anyone who has ever told someone to “just Google it”.
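Serendipity can also be approximated in code. One well-known way of trading pure relevance off against diversity is maximal marginal relevance (MMR); the toy sketch below (again my own illustration with invented scores, not how any real search engine works) picks each next result by balancing its relevance to the query against its similarity to the results already chosen, so a near-duplicate loses out to something slightly different.

    # Sketch of maximal marginal relevance (MMR) re-ranking: each step
    # picks the candidate that is relevant to the query but least similar
    # to the results already selected, nudging the list toward diversity.
    # All scores are invented; lambda_ controls the relevance/diversity mix.

    def mmr(candidates, relevance, similarity, k=3, lambda_=0.7):
        selected = []
        pool = list(candidates)
        while pool and len(selected) < k:
            best = max(
                pool,
                key=lambda c: lambda_ * relevance[c]
                - (1 - lambda_) * max((similarity[c][s] for s in selected), default=0.0),
            )
            selected.append(best)
            pool.remove(best)
        return selected

    relevance = {"a": 0.9, "b": 0.85, "c": 0.5}
    similarity = {
        "a": {"b": 0.95, "c": 0.1},
        "b": {"a": 0.95, "c": 0.1},
        "c": {"a": 0.1, "b": 0.1},
    }
    # "b" is nearly a duplicate of "a", so MMR surfaces "c" second instead.
    print(mmr(["a", "b", "c"], relevance, similarity, k=2))  # ['a', 'c']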

The future is ear: Why “hearables” are finally tech’s next big thing

Your ears have some enormously valuable properties. They are located just inches from your mouth, so they can understand your utterances far better than smart speakers across the room. Unlike your eyes, your ears are at work even when you are asleep, and they are our ultimate multi-taskers. Thousands die every year trying to text while they drive, but most people have no problem driving safely while talking or dictating messages–even if music is playing and children are chatting in the background.

Source: Burrows, P. (2018). The future is ear: Why “hearables” are finally tech’s next big thing.

Audio is going to be the next important user interface for human-computer interaction. You could argue that it already is (see Google Home and Assistant, Alexa, Siri, and Cortana). If you think of it as a bandwidth problem, you can see that we can take in so much more information by listening than by reading. And, unlike reading, listening frees us up to do other things at the same time.

The Industrial Era Ended, and So Will the Digital Era

While there is limited new value to be gleaned from things like word processors and smartphone apps, there is tremendous value to be unlocked in applying digital technology to fields like genomics and materials science to power traditional industries like manufacturing, energy, and medicine. Essentially, the challenge ahead is to learn how to use bits to drive atoms.

Source: The Industrial Era Ended, and So Will the Digital Era

We stop calling things “technology” when we stop seeing them. This is why we don’t really refer to writing, hammers and dishwashers as technology, even though they obviously are. I agree that digital technology will soon go the way of kitchen appliances, in that we will stop referring to it as if it were anything special. It will simply be the thing that drives everything else.

OpenPhysio abstract: Artificial intelligence in clinical practice – Implications for physiotherapy education

Here is the abstract of a paper I recently submitted to OpenPhysio, a new open-access journal with an emphasis on physiotherapy education.

About 200 years ago the invention of the steam engine ushered in an era of unprecedented development and growth in human social and economic systems, whereby human labour was supplanted by machines. The recent emergence of artificially intelligent machines has seen human cognitive capacity augmented by computational agents that are able to recognise previously hidden patterns within massive data sets. The characteristics of this second machine age are already influencing all aspects of society, creating the conditions for disruption to our social, economic, educational, health, legal and moral systems, and will likely have a far greater impact on human progress than the steam engine did. As AI-based technology becomes increasingly embedded within devices, people and systems, the fundamental nature of clinical practice will evolve, resulting in a healthcare system requiring profound changes to physiotherapy education. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and the newly-constituted care teams that will emerge. This paper describes some of the possible influences of AI-based technologies on physiotherapy practice, and the subsequent ways in which physiotherapy education will need to change in order to graduate professionals who are fit for practice in a 21st-century health system.

Read the full paper at OpenPhysio (note that this article is still under review).

Is that it? More, better apps?

It seems that much of the literature on the use of technology in education focuses on apps (Instagram, WhatsApp), services and platforms (Google Docs, Facebook) and hardware (tablets, laptops and phones). This is fine, of course. We need to understand how students and teachers use these things in the classroom. But is this really what we mean when we talk about innovation in the classroom?

Consider the changes wrought in society and industry between 1900 and 1970 as a result of the invention and implementation of technologies related to the electrification of cities, national road and railway networks, sanitation, pharmaceuticals, the internal combustion engine, and mass communication (Gordon, 2017). These were the kinds of innovations that changed the lives of hundreds of millions of people in truly significant ways because they changed the physical structures around us. They changed the configuration of space, which determines the kinds of activities that are available in that space. But what counts as innovation today? More, better apps. I came across this quote attributed to Elon Musk (although I can’t find a good source to confirm it): “Cellphones distract us from the fact that the subways are old”.

When we look at infrastructure we start to get a sense of what innovation really looks like, as well as the amount of effort it would take to change it in innovative ways. For example, deciding that cities and towns should have green spaces set aside for their citizens is by no means intuitive or inevitable. Town planners could just as easily have decided that that real estate would better serve commercial interests. And how would you go about changing those green spaces: installing safer playground equipment, or rerouting a running track? The point is that infrastructure is old, and because it’s old it naturally forms the baseline upon which other things are built. No parks in the city means no green space in which to enjoy being outdoors, and if you want green space you’re going to have to do an enormous amount of work to get it. You can’t just build a new app.

We’re spending a lot of time looking at technology that may improve some superficial aspects of pedagogical work, but we spend very little time on anything that would fundamentally change the underlying infrastructure. Maybe this is because we don’t even see the infrastructure anymore? It’s easier to focus on the superficial stuff that everyone can see. For example, at the time of writing there are 4650 studies looking at the use of Snapchat in the classroom, but relatively few that question why we’re still in a classroom. With the desks screwed to the floor. Changing infrastructure is the hard work that no one wants to do, but it’s also the important work, because that’s what everyone else builds on. We’ve been distracted into thinking that we’re innovating when we’re really just painting over the cracks in the walls.

Would we even recognise innovation in higher education, or would we disregard it because it doesn’t fit the mental model of what we think it should look like? Maybe we could use this idea as an indicator of innovative work: if we recognise it, it’s probably not innovative. That’s not to say that we should break everything and innovate for its own sake. But let’s be clear about what innovation really means. It’s not the consumption of content in new formats. It’s not the use of laptops and tablets instead of books. It’s not the use of Twitter to share resources. These may be good, useful iterations of our practice, but they’re not going to change the infrastructure of learning.

In five years’ time Snapchat will be gone and there’ll be a new #educationapp trending on Twitter, but the desks will still be screwed to the floor.

Critical digital pedagogy in the classroom: Practical implementation

Update (12-02-18): You can now download the full chapter here (A critical pedagogy for online learning in physiotherapy education) and the edited collection here.

This post is inspired by the work I’ve recently done for a book chapter, as well as several articles on Hybrid Pedagogy, in particular Adam Heidebrink-Bruno’s Syllabus as Manifesto. I’ve been wanting to make some changes to my Professional Ethics module for a while, and the past few weeks have given me a lot to think about. Critical pedagogy is an approach to teaching and learning that not only puts the student at the centre of the classroom but then helps them figure out what to do now that they’re there. It also pushes teachers to go beyond the default configurations of classroom spaces. Critical digital pedagogy is when we use technology to do things that are difficult or impossible in those spaces without it.

One of the first things we do in each module we teach is provide students with a course overview, or syllabus. We don’t even think about it, but this document might be the first bit of insight students get into how we define the space we’re going to occupy with them. How much thought do we really give to the language and structure of the document? How much of it is informed by the students’ voice? I wondered what my own syllabus would look like if I took to heart Jesse Stommel’s suggestion that we “begin by trusting students”.

I wanted to find out more about where my students come from, so I created a shared Google Doc with a very basic outline of the information that needed to be included in a syllabus. I asked them to begin by anonymously sharing something about themselves that they hadn’t shared with anyone else in the class before: something that influenced who they are and how they came to be in that class. I took what they shared, edited it, and created the Preamble to our course outline, describing our group and our context. I also added my own section to the document, sharing my values, beliefs and background, and positioning myself and my biases up front. I wanted them to know that, as I ask them to share something of themselves, I will do the same.

The next thing was the learning outcomes for the module. We say that we want our students to take responsibility for their learning, but we set up the entire programme without any input from them. We decide what they will learn, based on the outcomes we define, as well as how it will be assessed. So for this syllabus I included the outcomes that we are required to have, and then asked the students each to define what “success” in this module looks like for them. Each student described what they wanted to achieve by the end of the year, wrote it as a learning outcome, decided on the indicators of progress they needed, and then set timelines for completion. So each of them has the learning outcomes that the institution and professional body require, plus one. I think this goes some way toward acknowledging the unique context of each student, and it also gives them skills in evaluating their own development towards personally meaningful goals that they set themselves.

I’ve also decided that the students will decide their own marks for these personal outcomes. At the end of the year they will evaluate their progress against the performance indicators they have defined, and give themselves a grade that will count 10% towards their Continuous Assessment mark. This decision was inspired by this post on contract grading from HASTAC. What I’m doing isn’t exactly the same thing, but it’s a similar concept in that students not only define what is important to them, but decide on the grade they earn. I’m not 100% sure how this will work in practice, but I’m leaning towards a shared document where students do peer review on each other’s outcomes and progress. I’m interested to see what a student-led, student-graded, student-taught learning outcome looks like.

Something that is usually pretty concrete in any course is the content. But many concepts can actually be taught in a wide variety of ways, and we just choose the ones we’re most familiar with. For example, the concept of justice (fairness) could be discussed using the history of the profession, resource allocation for patients, Apartheid in South Africa, public and private health systems, and so on. In the same shared document I asked students to suggest topics they’d like to cover in the module: the things that interest them, leaving me to figure out how to teach concepts from professional ethics in those contexts. This is what they added: Income inequality. Segregation. #FeesMustFall. Can ethics be taught? The death penalty. Institutional racism. Losing a patient. That’s a pretty good range of topics that will let me cover quite a bit of the work in the module. It’s also more likely that students will engage, considering that these are the things they’ve identified as being interesting.

Another area that we as teachers have taken complete control of is assessment. We decide what will be assessed, when the assessment happens, how it is graded, what formats we’ll accept…we even go so far as to tell students where to put the full stops and commas in their reference lists. That’s a pretty deep level of control we’re exerting. I’ve been using a portfolio for assessment in this module for a few years, so I’m at a point where I’m comfortable with students submitting a variety of different pieces. What I’m doing differently this year is asking the students to submit each task when it’s ready, rather than by some arbitrary deadline. They get to choose when it suits them to do the work, but I have asked them to be reasonable about this, mainly because if I’m going to give them decent feedback I need time before the next piece arrives. If everything is submitted at once, there’s no time to use the feedback to improve the next submission.

The students then decided what our “rules of engagement” would be in the classroom. Our module guides usually have some kind of prescription about what behaviour is expected, so I asked the students what they thought appropriate behaviour looks like, and then to commit as a class to those rules. Unsurprisingly, their suggestions looked a lot like what I would have written myself. Then I asked them to decide how to address situations in which individuals contravene our rules. I don’t want to be the policeman who has to discipline students…what would it look like if students decided in advance what would work in their classroom, and then took action when necessary? I’m pretty excited to find out.

I decided that there would be no notes provided for this module, and no textbook either. I prepare the lecture outline in a shared Google document, including whatever writing assignments the students need to work on and links to open-access resources relevant to the topic. The students take notes collaboratively in the document, which I review afterwards, adding comments and structure and pointing them to additional resources. Together we come up with something unique that describes our time in class. Even if the topic is static, our conversations never are, so any set of notes that focuses only on the topic will necessarily leave out the sometimes wonderful discussion that happens in class. This way the students get the main ideas that are covered, but we also capture the conversation, which I can supplement afterwards.

Finally, I’ve set up a module evaluation form that is open for comment immediately, and committed to keeping it open for the duration of the year. The problem with module evaluations is that we ask students to complete them at the end of the year, when they’re finished and have no opportunity to benefit from their suggestions. I wouldn’t fill it in either. This way, students get to evaluate me and the module at any time, and I get feedback that I can act on immediately. I use a simple Google Form that they can access quickly and easily, with a couple of rating scales and an option to add an open-ended comment. I’m hoping that this ongoing evaluation option, in a format that is convenient for students, means they will use it to improve our time together.

As we worked through the document I could see students really struggling with the idea that they were being asked to contribute to the structure of the module. Even as they commented on each other’s suggestions, there was an uncertainty there. It took a while for them to become comfortable saying what they wanted: not just contributing with their physical presence in the classroom, but really contributing to the design of the module; how it would be run, how they would be assessed, how they could “be” in the classroom. I’m not sure how this is going to work out, but I felt a level of enthusiasm and energy that I haven’t felt before. I felt a glimmer of something real as they started to take seriously my offer to take them seriously.

The choices above add a few very powerful elements to the other ways we integrate technology into this module (the students’ portfolios are all on the IEP blog, they do collaborative authoring and peer review in Google Drive, course resources are shared in Drive, they create digital stories for one of the portfolio submissions, and occasionally we use Twitter to share interesting stories). It makes very clear to the students that this is their classroom and their learning experience. I’m a facilitator, but they get to make real choices that have a real impact in the world. They get to understand what it feels like to have power and authority, as well as the responsibility that comes with that.