Technology Beyond the Tools

You didn’t need to know about how to print on a printing press in order to read a printed book. Writing implements were readily available in various forms in order to record thoughts, as well as communicate with them. The use was simple requiring nothing more than penmanship. The rapid advancement of technology has changed this. Tech has evolved so quickly and so universally in our culture that there is now literacy required in order for people to effectively and efficiently use it.

Reading and writing as a literacy was hard enough for many of us, and now we are seeing that there is a whole new literacy that needs to be not only learned, but taught by us as well.

Source: Whitby, T. (2018). Technology Beyond the Tools.

I wrote about the need to develop these new literacies in a recent article (under review) in OpenPhysio. From the article:

As clinicians become single nodes (and not even the most important nodes) within information networks, they will need data literacy to read, analyse, interpret and make use of vast data sets. As they find themselves having to work more collaboratively with AI-based systems, they will need the technological literacy that enables them to understand the vocabulary of computer science and engineering that enables them to communicate with machines. Failing that, we may find that clinicians will simply be messengers and technicians carrying out the instructions provided by algorithms.

It really does seem like we’re moving towards a society in which the successful use of technology is, at least to some extent, premised on an understanding of how it works. As educators, it is incumbent on us to 1) know how the technology works so that we can 2) help students use it effectively while at the same time avoiding exploitation by for-profit companies.

See also: Aoun, J. (2017). Robot-Proof: Higher Education in the Age of Artificial Intelligence. MIT Press.

‘The discourse is unhinged’: how the media gets AI alarmingly wrong

Zachary Lipton, an assistant professor at the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from “interesting-ish research” to “sensationalized crap”. According to Lipton, in recent years broader interest in topics like “machine learning” and “deep learning” has led to a deluge of this type of opportunistic journalism, which misrepresents research for the purpose of generating retweets and clicks – he calls it the “AI misinformation epidemic”.

Source: Schwartz, O. (2018). ‘The discourse is unhinged’: how the media gets AI alarmingly wrong.

There’s a lot of confusion around what we think of as AI. For most people actually working in the field, current AI and machine learning research presents its findings as solutions to very narrowly constrained problems, derived from the statistical manipulation of large data sets and expressed within certain confidence intervals. There’s no talk of consciousness, choice, or values of any kind. To be clear, this is “intelligence” as defined within very specific parameters. It’s important that clinicians and educators (and everyone else, actually) understand, at least at a basic level, what we mean when we say “artificial intelligence”.

Of course, there are also people working on issues of artificial general intelligence and superintelligence, which are different to the narrow (or weak) intelligence being reported in today’s sensationalist headlines.

An introduction to artificial intelligence in clinical practice and education

Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each slide. This presentation is based on a paper I submitted to OpenPhysio, called: “Artificial intelligence in clinical practice: Implications for physiotherapy education“.


The graph shows how traffic to a variety of news websites changed after Facebook made a change to their Newsfeed algorithm, highlighting the influence that algorithms have on the information presented to us and how little real choice we now have about what to read. When algorithms are responsible for filtering what we see, they have power over what we learn about the world.


The graph shows the near-flat line of social development and population growth until the invention of the steam engine. Before that, all of the Big Ideas we came up with had relatively little impact on our physical well-being. If your grandfather spent his life pushing a plough, there was an excellent chance that you’d spend your life pushing one too. But once we figured out how to augment our physical abilities with machines, we saw significant advances in society and industry and an associated improvement in everyone’s quality of life.


The emergence of artificial intelligence in the form of narrowly constrained machine learning algorithms has demonstrated the potential for important advances in cognitive augmentation. Basically, we are starting to really figure out how to use computers to enhance our intelligence. However, we must remember that we’ve been augmenting our cognitive ability for a long time, from exporting our memories onto external devices, to performing advanced computation beyond the capacity of our brains.


The enthusiasm with which modern AI is being embraced is not new. The research and engineering aspects of artificial intelligence have been around since the 1950s, while fictional AI has an even longer history. The field has been through a series of highs and lows (the lows are called AI winters). The developments during these cycles were fueled by what has become known as Good Old-Fashioned AI: early attempts to explicitly design decision-making into algorithms by hard-coding all possible variations of interaction in a closed environment. Understandably, these systems were brittle and unable to adapt to even small changes in context. This is one of the reasons that previous iterations of AI had little impact in the real world.


There are three main reasons why it’s different this time. The first is the emergence of cheap but powerful hardware (mainly central and graphics processing units), which has seen computational power growing by a factor of 10 every 4 years. The second is the exponential growth of data; massive data sets are an important reason that modern AI approaches have been so successful. The graph in the middle column shows data growth in zettabytes (10 to the power of 21 bytes). At this rate of data growth we’ll run out of metric prefixes in a few years (yotta is the only prefix after zetta). The third is the emergence of vastly improved machine learning algorithms that are able to learn without being explicitly told what to learn. In the example here, the algorithm has coloured in the line drawings to create a pretty good photorealistic image, without being taught any of the concepts involved, i.e. human, face, colour, drawing, photo, etc.
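As a rough, back-of-the-envelope illustration of the first two trends, here is a minimal Python sketch. The data figures (starting volume and growth rate) are my own assumptions for the sake of the example, not numbers from the slides.

```python
# Back-of-the-envelope projection of compute and data growth (illustrative only).
compute_multiplier_per_year = 10 ** (1 / 4)   # ~1.78x per year, from "10x every 4 years"

# Assumed figures (not from the presentation): ~33 zettabytes of data in 2018,
# growing at roughly 25% per year.
data_zettabytes = 33.0
year = 2018
while data_zettabytes < 1000:                  # 1000 ZB = 1 yottabyte
    year += 1
    data_zettabytes *= 1.25

print(f"~{compute_multiplier_per_year:.2f}x more compute each year")
print(f"Data volume passes one yottabyte around {year}")
```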


We’re increasingly seeing evidence that in some very narrow domains of practice (e.g. reasoning and information recall), machine learning algorithms can outdiagnose experienced clinicians. It turns out that computers are really good at classifying patterns of variables that are present in very large datasets. And diagnosis is just a classification problem. For example, algorithms are very easily able to find sets of related signs and symptoms and put them into a box that we call “TB”. And increasingly, they are able to do this classification better than the best of us.
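To make the “diagnosis is just classification” point concrete, here is a deliberately toy sketch using scikit-learn. The signs, symptoms, labels and choice of model are all invented for illustration; a real diagnostic system would be trained on many thousands of labelled patient records.

```python
# Toy illustration of diagnosis-as-classification (invented data, not clinical advice).
from sklearn.ensemble import RandomForestClassifier

# Each row: [persistent cough, night sweats, weight loss, haemoptysis]; 1 = present.
signs_and_symptoms = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
]
labels = ["TB", "TB", "TB", "not TB", "not TB", "not TB"]  # human-assigned diagnoses

model = RandomForestClassifier(random_state=0).fit(signs_and_symptoms, labels)

# A new patient presenting with cough, night sweats and weight loss:
new_patient = [[1, 1, 1, 0]]
print(model.predict(new_patient))         # e.g. ['TB']
print(model.predict_proba(new_patient))   # class probabilities, not a certainty
```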


It is estimated that up to 60% of a doctor’s time is spent capturing information in the medical record. Natural language processing algorithms are able to “listen” to the ambient conversation between a doctor and patient, record the audio and transcribe it (translating it in the process if necessary). The system then performs semantic analysis of the text (not just keyword analysis) to extract meaningful information, which it can use to populate an electronic health record. While the technology is in a very early phase and not yet safe for real-world application, it’s important to remember that this is the worst it’s ever going to be. Even if we reach some kind of technological dead end with respect to machine learning and from now on we only increase efficiency, we are still looking at a transformational technology.
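A rough sketch of what such a pipeline might look like is below. The transcribe() function is a hypothetical stand-in for any speech-to-text service, and spaCy’s general-purpose entity recogniser stands in for the clinical-grade semantic analysis a real system would need.

```python
# Rough sketch of "ambient" clinical documentation (illustrative only).
# transcribe() is a hypothetical placeholder for a speech-to-text service;
# spaCy (a general-purpose NLP library) then extracts structured candidates.
import spacy

def transcribe(audio_path: str) -> str:
    # Placeholder: a real system would send the recording to a speech-to-text API.
    return ("Patient reports three weeks of lower back pain, worse in the mornings, "
            "currently taking ibuprofen 400 mg twice daily.")

nlp = spacy.load("en_core_web_sm")
doc = nlp(transcribe("consultation.wav"))

draft_record = {
    "durations": [ent.text for ent in doc.ents if ent.label_ == "DATE"],
    "quantities": [ent.text for ent in doc.ents if ent.label_ in ("QUANTITY", "CARDINAL")],
    "candidate_findings": [chunk.text for chunk in doc.noun_chunks],  # crude; a real system needs clinical NER
}
print(draft_record)
```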


An algorithm recently passed the Chinese national medical exam, qualifying (in theory) as a physician. While we can argue that practising as a physician is more than writing a text-based exam, it’s hard not to acknowledge the fact that – at the very least – machines are becoming more capable in the domains of knowledge and reasoning that characterise much of clinical practice. Again, this is the worst that this technology is ever going to be.


This graph shows the number of AI applications under development in a variety of disciplines, including medicine (on the far right). The green segments show the number of applications where AI is outperforming human beings, orange segments show applications that are performing relatively well, and blue highlights areas that need work. There are two other points worth noting: medical AI is the area of research that is clearly showing the most significant advances (maybe because it’s the area where companies can make the most money); and all the way at the far left of the graph is education, suggesting that it may be some time before algorithms show the same progress in teaching.


Contrary to what we see in the mainstream media, AI is not a monolithic field of research; it consists of a wide variety of different technologies and philosophies that are each sometimes referred to under the more general heading of “AI”. While much of the current progress is driven by machine learning algorithms (which is itself driven by the three characteristics of modern society highlighted earlier), there are many areas of development, each of which can potentially contribute to different areas of clinical practice. For the purposes of this presentation, we can define AI as any process that is able to independently achieve an objective within a narrowly constrained domain of interest (although the constraints are becoming looser by the day).


Machine learning is a sub-domain of AI research that works by exposing an algorithm to a massive data set and asking it to look for patterns. By comparing what it finds to human-tagged patterns in the data, developers can fine-tune the algorithm (i.e. “teach” it) before exposing it to untagged data and seeing how well it performs relative to the training set. This generally describes the “learning” process of machine learning. Deep learning is a sub-domain of machine learning that works by passing data through many layers, allocating different weights to the data at each layer, thereby coming up with a statistical “answer” that expresses an outcome in terms of probability. Deep learning neural networks underlie many of the advances in modern AI research.
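For readers who want to see what that training-and-testing loop looks like in practice, here is a minimal sketch on synthetic data. The dataset, network size and library choices are mine, purely for illustration.

```python
# Minimal sketch of the train-then-test loop described above (synthetic data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A made-up dataset: 1000 "patients", 20 features, 2 classes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Human-tagged examples are split into a training set and a held-out test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A small neural network: data passes through hidden layers whose weights are
# adjusted during training; the output is expressed as a probability.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen data:", model.score(X_test, y_test))
print("probability estimates:", model.predict_proba(X_test[:3]))
```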


Because machine and deep learning algorithms are trained on (biased) human-generated datasets, it’s easy to see how the algorithms themselves will have an inherent bias embedded in the outputs. The Twitter screenshot shows one of the least offensive tweets from Tay, an AI-enabled chatbot created by Microsoft, which learned from human interactions on Twitter. In the space of a few hours, Tay became a racist, sexist, homophobic monster – because this is what it learned from how we behave on Twitter. This is more of an indictment of human beings than it is of the algorithm. The other concern with neural networks is that, because of the complexity of the algorithms and the number of variables being processed, human beings are unable to comprehend how the output was computed. This has important implications when algorithms are helping with clinical decision-making and is the reason that resources are being allocated to the development of what is known as “explainable AI”.


As a result of the changes emerging from AI-based technologies in clinical practice, we will soon need to stop thinking of our roles in terms of “professions” and rather in terms of “tasks”. This matters because, increasingly, many of the tasks we associate with our professional roles will be automated. This is not all bad news though, because it seems probable that increased automation of the repetitive tasks in our repertoire will free us up to take on more meaningful tasks, for example, having more time to interact with patients. We need to start asking which tasks computers are better at and start allocating those tasks to them. Of course, we will need to define what we mean by “better”: more efficient, more cost-effective, faster, etc.


Another important change that will require the use of AI-based technologies in clinical practice will be the inability of clinicians to manage – let alone understand – the vast amount of information being generated by, and from, patients. Not only are all institutional tests and scans digital but increasingly, patients are creating their own data via wearables – and soon, ingestibles – all of which will require that clinicians are able to collect, filter, analyse and interpret these vast streams of information. There is evidence that, without the help of AI-based systems, clinicians simply will not have the cognitive capacity to understand their patients’ data.


The impact of more patient-generated health data is that we will see patients being in control of their data, which will exist on a variety of platforms (cloud storage, personal devices, etc.), none of which will be available to the clinician by default. This means that power will move to the patient as they make choices about who to allow access to their data in order to help them understand it. Clinicians will need to come to terms with the fact that they will no longer wield the power in the relationship and in fact, may need to work within newly constituted care teams that include data scientists, software engineers, UI designers and smart machines. And all of these interactions will be managed by the patient who will likely be making choices with inputs from algorithms.


The incentives for enthusiastic claims around developments in AI-based clinical systems are significant; this is an academic land grab the likes of which we have only rarely experienced. The scale of the funding involved puts pressure on researchers to exaggerate claims in order to be the first to every important milestone. This means that clinicians will need to become conversant with the research methods and philosophies of the data scientists who are publishing the most cutting-edge research in the medical field. The time will soon come when it will be difficult to understand the language of healthcare without first understanding the language of computer science.


The implications for health professions educators are profound, as we will need to start asking ourselves what we are preparing our graduates for. When clinical practice is enacted in an intelligent environment and clinicians are only one of many nodes in vast information networks, what knowledge and skills do they need to thrive? When machines outperform human beings in knowledge and reasoning tasks, what is the value of teaching students about disease progression, for example? We may find ourselves graduating clinicians who are well-trained, competent and irrelevant. It is not unreasonable to think that the profession called “doctor” will not exist in 25 years’ time, having been superseded by a collective of algorithms and third-party service providers who provide more fine-grained services at a lower cost.


There are three new literacies that health professions educators will need to begin integrating into our undergraduate curricula. Data literacy, so that healthcare graduates will understand how to manage, filter, analyse and interpret massive sets of information in real-time; Technological literacy, as more and more of healthcare is enacted in digital spaces and mediated by digital devices and systems; and Human literacy, so that we can become better at developing the skillsets necessary to interact more meaningfully with patients.


There is evidence to suggest that, while AI-based systems outperform human beings on many of the knowledge and reasoning tasks that make up clinical practice, the combination of AI and human originality results in the most improved outcomes of all. In other words, we may find that patient outcomes are best when we figure out how to combine human creativity and emotional response with machine-based computation.


And just when we’re thinking that “creativity” and “originality” are the sole province of human beings, we’re reminded that AI-based systems are making progress in those areas as well. It may be that the only way to remain relevant in a constantly changing world is to develop our ability to keep learning.

OpenPhysio abstract: Artificial intelligence in clinical practice – Implications for physiotherapy education

Here is the abstract of a paper I recently submitted to OpenPhysio, a new open-access journal with an emphasis on physiotherapy education.

About 200 years ago the invention of the steam engine ushered in an era of unprecedented development and growth in human social and economic systems, whereby human labour was supplanted by machines. The recent emergence of artificially intelligent machines has seen human cognitive capacity augmented by computational agents that are able to recognise previously hidden patterns within massive data sets. The characteristics of this second machine age are already influencing all aspects of society, creating the conditions for disruption to our social, economic, educational, health, legal and moral systems, and will likely have a far greater impact on human progress than the steam engine did. As AI-based technology becomes increasingly embedded within devices, people and systems, the fundamental nature of clinical practice will evolve, resulting in a healthcare system requiring profound changes to physiotherapy education. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and the newly constituted care teams that will emerge. This paper describes some of the possible influences of AI-based technologies on physiotherapy practice, and the subsequent ways in which physiotherapy education will need to change in order to graduate professionals who are fit for practice in a 21st century health system.

Read the full paper at OpenPhysio (note that this article is still under review).

altPhysio | Technology as infrastructure

This is the fourth post in my altPhysio series, where I’m exploring alternative ways of thinking about a physiotherapy curriculum by imagining what a future school might look like. This post is a bit longer than the others because this is an area I’m really interested in and spend a lot of time thinking about. I’ve also added more links to external sources because some of this stuff sounds like science fiction. The irony is that everything in this post describes technology that currently exists, and as long as we’re thinking about whether or not to share PowerPoint slides, we’re not paying attention to what’s important. This post was a ton of fun to write.

Q: Can you talk a little bit about the history of technology integration in health professions education? Maybe over the last decade or so.

In the early part of the 21st century we saw more institutions starting to take the integration of technology seriously. Unfortunately the primary use of digital services at the time was about moving content around more efficiently. Even though the research was saying that the content component was less important for learning than the communication component, we still saw universities using the LMS primarily to share notes and presentations with students.

The other thing is that we were always about 5-10 years behind the curve when it came to the adoption of technology. For example, wikis started showing up in the medical education literature almost 10 years after they were invented. The same with MOOCs. I understand the need to wait and see how technologies stabilise and then choose something that’s robust and reliable. But the challenge is that you lose the early-mover advantages of adopting a technology before it becomes mainstream. That’s why we tend to adopt a startup mentality to how we use technology at altPhysio.

Q: What do you mean by that? How is altPhysio like a startup?

We pay attention to what’s on the horizon, especially the emerging technologies that have the potential to make an impact on learning in 1, 2 and 5 year time frames. We decided that we weren’t going to wait and see what technologies stabilised and would rather integrate the most advanced technologies available at the time. We designed our programme to be flexible and to adapt to change based on what’s happening around us. When the future is unknowable because technological advances are happening faster than you can anticipate, you need a system that can adapt to the situations that emerge. We can’t design a rigid curriculum that attempts to guess what the future holds. So we implement and evaluate rapidly, constantly trying out small experiments with small groups of students.

Once we decided that we’d be proactive instead of reactive in how we use and think about technology, we realised that we’d need a small team in the school who are on the lookout for technologies that have the potential to enhance the curriculum. The team consists of students and staff who identify emerging technologies before they become mainstream, prepare short reports for the rest of the school, recruit beta testers and plan small scale research projects that highlight the potential benefits and challenges of implementing the technology at scale.

We’ve found that this is a great way for students to invest themselves in their own learning, drive research in areas they are interested in, take leadership roles and manage small projects. Staff on the team act as supervisors and mentors, but in fact are often students themselves, as both groups push each other further in terms of developing insights that would not be possible working in isolation.

Q: But why the emphasis on technology in health professions education? Isn’t this programme about developing physiotherapists?

The WHO report on the use of elearning for undergraduate health professional education called for the integration of technology into the curriculum, as did the Lancet Commission report. And it wasn’t just about moving content more efficiently through the system, but rather about using technology intentionally to change how we think about the curriculum and student learning. The ability to learn is increasingly mediated by digital and information literacy, and we want our students’ learning potential to be maximised.

Low levels of digital literacy in the 21st century are akin to a limited ability to read and write in the past. Imagine trying to learn in the 20th century without being able to read and write. Well, that’s what it’s like trying to learn today if you don’t have a grasp of how digital technologies mediate your construction of knowledge. Integrating technology is not about adding new gadgets or figuring out how to use Facebook groups more effectively.

Technology is an infrastructure that can be used to open up and enhance students’ learning, or to limit it. Freire said that there’s no such thing as a neutral education process, and we take seriously the fact that the technologies we use have a powerful influence on students’ learning.

Q: How do you develop digital and information literacy alongside the competencies that are important for physiotherapists? Doesn’t an emphasis on technology distract students from the core curriculum?

We don’t offer “Technology” as something separate to the physiotherapy curriculum, just as you don’t offer “Pen and paper” as something that is separate. The ability to use a pen and paper used to be an integral and inseparable aspect of learning, and we’ve just moved that paradigm to now include digital and information literacy. Technology isn’t separate to learning, it’s a part of learning just like pen and paper used to be.

Digital and information literacy is integrated into everything that happens at the school. For example, when a new student registers they immediately get allocated a domain on the school servers, along with a personal URL. A digital domain of their own where they get to build out their personal learning environment. This is where they make notes, pull in additional resources like books and video, and work on their projects. It’s a complete online workspace that allows individual and collaborative work and serves as a record of their progress through the programme. It’s really important to us that students learn how to control the digital spaces that they use for learning, and that they’re able to keep control over those spaces after they graduate.

When students graduate, their personal curriculum goes with them, containing the entire curriculum (every resource we shared with them) as well as every artefact of their learning they created, and every resource that they pulled in themselves. Our students never lose the content that they aggregated over the duration of the programme, but more importantly, they never lose the network they built over that time. The learning network is by far the most important part of the programme, and includes not only the content relationships they’ve formed during the process but includes all interactions with their teachers, supervisors, clinicians and tutors.

Q: Why is it important for students to work in digital space, as well as physical space? And how do your choices about online spaces impact on students’ learning?

Think about how the configuration of physical space in a 20th century classroom dictated the nature of interactions that were possible in that space. How did the walls, desks and chairs, and the position of the lecturer determine who spoke, for example? Who moved? Who was allowed to move? How was work done in that space? Think about how concepts of “front” and “back” (in a classroom) have connotations for how we think about who sits where.

Now, how does the configuration of digital space change the nature of the interactions that are possible in that space? How we design the learning environment (digital or physical) not only enables or disables certain kinds of interactions, but it says something about how we think about learning. Choosing one kind of configuration over another articulates a set of values. For example, we value openness in the curriculum, from the licensing of our course materials, to the software we build on. This commitment to openness says something about who we are and what is important to us.

The fact that our students begin here with their own digital space – a personal learning environment – that they can configure in meaningful ways to enhance their potential for learning, sends a powerful message. Just like the physical classroom configuration changes how power is manifested, so can the digital space. Our use of technology tells students that they have power in terms of making choices with respect to their learning.

To go back to your question about the potential for technology to distract students from learning physiotherapy; did you ever think about how classrooms – the physical configuration of space – distracted students from learning? Probably not. Why not?

Q: You mentioned that openness is an important concept in the curriculum. Can you go into a bit more detail about that?

Maybe it would be best to use a specific example because there are many ways that openness can be defined. Our curriculum is an open source project that gives us the ability to be as flexible and adaptable as a 21st century curriculum needs to be. It would be impossible for us to design a curriculum that was configured for every student’s unique learning needs and that was responsive to a changing social context, so we started with a baseline structure that could be modified over time by students.

We use a GitHub repository to host and collaborate on the curriculum. Think of a unique instance of the curriculum that is the baseline version – the core – that is hosted on our servers. When a student registers, we fork that curriculum to create another, unique instance on the student’s personal digital domain. At this moment, the curriculum on the student’s server is an exact copy of the one we have, but almost immediately the student’s version is modified based on their personal context. For example, the entire curriculum – including all of the content associated with the programme – is translated into the student’s home language if they so choose. Now that it’s on their server, they can modify it to better suit them, using annotation and editing tools, and integrating external resources into their learning environment.

One of the most powerful features of the system is that it allows students to push ideas back into our core curriculum. They make changes on their own versions and, if they’d like to see a change implemented across the programme, they send us a pull request, which is basically a message that shows the suggested change along with a comment explaining why the student wants it. It’s a feedback mechanism for them to send us signals about what works well and what can be improved. It enables us to constantly refine and improve the curriculum based on real-time input from students.

On top of this, every time we partner with other institutions, they can fork the curriculum and modify it to suit their context, and then push the changes back upstream. This means that the next time someone wants to partner with us, the core curriculum they can choose from is bigger and more comprehensive. For example, our curriculum is now the largest database of case studies in the world because most institutions that fork the curriculum and make their own changes also send those changes back to the core.

Q: You have a very different approach to a tutorial system. Tell us about how tutors are implemented in your school.

The tutors at altPhysio are weak AI agents – relatively simple algorithms that perform within very narrow constraints, linked to basic tasks associated with student learning. Students “connect” with their AI tutors in the first week of the programme, which for the most part involves downloading an app onto their phones. This is then synced across all of their other devices and digital spaces, including laptops, wearables and cloud services, so that the AI is “present” in whatever context the student is learning.

As AI has become increasingly commoditised in the last decade, AI as a service has allowed us to take advantage of features that enhance learning. For example, a student’s tutor will help her with establishing a learning context, finding content related to that context, and reasoning through the problems that arise in the context. In addition, the AIs help students manage time on task, remind them about upcoming tasks and the associated preparation for those tasks, and generally keep them focused on their learning.

Over time the algorithms evolve with students, becoming increasingly tied to them and their own personal learning patterns. While all AI tutors begin with the same structure and function, they gradually become more tightly integrated with the student. Some of the more adventurous students have had their AIs integrated with neural lace implants, which has significantly accelerated their ability to function at much higher levels and at much greater speeds than the rest of us. These progressions have, obviously, made us think very differently about assessment.

Q: What about technology used during lectures? Is there anything different to what you’ve already mentioned?

Lectures have a different meaning here than at other institutions, and I suspect we’ll talk about that later. Anyway, during lectures the AI tutors act as interpreters for the students, performing real-time translation for our international speakers, as well as doing speech-to-text transcription in real time. This means that our deaf students can follow all speech as text in real time, which is pretty cool. All the audio, video and text that is generated during lectures is saved, edited and synced to the students’ personal domains, where it’s available for recall later.

Our students use augmented reality a lot in the classroom and clinical context, overlaying digital information on their visual fields in order to get more context in the lecture. For example, while I’m talking about movement happening at the elbow, the student might choose to display the relevant bones, joints and muscles responsible for the movement. As the information is presented to them, they can choose to save that additional detail at the point in the lecture where I discussed it, so that when they’re watching the video of the lecture later, the additional information is included. We use this system a lot for anatomy and other movement- and structure-type classes.


Q: That sounds like a pretty comprehensive overview of how technology has some important uses beyond making content easier to access. Any final thoughts?

Technology is not something that we “do”, it’s something that we “do things with”. It enables more powerful forms of communication and interaction, both in online and physical spaces, and to think of it in terms of another “platform” or “service” is to miss the point. It amplifies our ability to do things in the world and just because it’s not cheap or widely distributed today doesn’t mean it won’t be in the future.

In 2007 the iPhone didn’t exist. Now every student in the university carries in their pocket a computer more powerful than the ones we used to put men on the moon. We should be more intentional about how we use that power, and forget about whatever app happens to be trending today.

 

Technology will make lecturers redundant – but only if they let it


This article was originally published on The Conversation. Read the original article.

A teacher walks into a classroom and begins a lesson. As she speaks, the audio is translated in real time into a variety of languages that students have pre-selected, so each can hear the lecturer’s voice in their own language. It can even be delivered directly into their auditory canal so that it does not disturb other students. The lecturer’s voice is also transcribed in real-time, appearing in a display that presents digital content over the students’ visual field.

As the lesson progresses, students identify concepts they feel need further clarification. They submit highly individual queries to search engines that use artificial intelligence algorithms to filter and synthesise results from a variety of sources. This information is presented in their augmented reality system, along with the sources used, and additional detail in the form of images and animations.


All of the additional information gathered by students is collated into a single set of notes for the lesson, along with video and audio recordings of the interactions. It’s then published to the class server.

This isn’t science fiction. All of the technology described here currently exists. Over time it will become more automated, economical and accurate.

What does a scenario like the one described here mean for lecturers who think that “teaching” means selecting and packaging information for students? There are many excellent theoretical reasons for why simply covering the content or “getting through the syllabus” has no place in higher education. But for the purposes of this article I’ll focus on the powerful practical reasons that lecturers who merely cover the content are on a guaranteed path to redundancy.

The future isn’t coming – it’s here

The technology described above may sound outlandish and seem totally out of most students’ reach. But consider the humble – and ubiquitous – smartphone. A decade ago, the iPhone didn’t exist. Five years ago most students in my classes at a South African university didn’t have smartphones. Today, most do. Research shows that this growth is mirrored across Africa. The first cellphones were prohibitively expensive, but now smartphones and tablets are handed out to people opening a bank account. The technology on these phones is also becoming increasingly powerful, and will continue to advance so that what is cutting edge today will be mainstream in about five years’ time.

This educational technology can change the way that university students learn. But ultimately, machines can’t replace teachers. Unless, that is, teachers are just selecting and packaging content with a view to “getting through the syllabus”. As demonstrated above, computers and algorithms are becoming increasingly adept at the filtering and synthesis of specialised information. Teachers who focus on the real role of universities – teaching students how to think deeply and critically – and who have an open mind, needn’t fear this technology.

Crucial role of universities

In a society where machines are taking over more and more of our decision-making, we must acknowledge that the value of a university is not the academics who see their work as controlling access to specialised knowledge.

Rather, it’s that higher education institutions constitute spaces that encourage in-depth investigation into the nature of the world. The best university teachers don’t just focus on content because doing so would reduce their roles to information filters who simply make decisions about what content is important to cover.

Digital tools are quickly getting to the point where algorithms will outperform experts, not only in filtering content but also in synthesising it. Teachers should embrace technology by encouraging their students to build knowledge through digital networks both within and outside the academy. That way they will never become redundant. And they’ll ensure that their graduates are critical thinkers, not just technological gurus.

Physiotherapy in 2050: Ethical and clinical implications

This post describes a project that I began earlier this week with my 3rd year undergraduate students as part of their Professional Ethics module. The project represents a convergence of a few ideas that have been bouncing around in my head for a couple of years and are now coming together as a result of a proposal that I’m putting together for a book chapter for the Critical Physiotherapy Network. I’m undecided at this point if I’ll develop it into a full research proposal, as I’m currently feeling more inclined to just have fun with it rather than turn it into something that will feel more like work.

The project is premised on the idea that health and medicine – embedded within a broader social construct – will be significantly impacted by rapidly accelerating changes in technology. The question we are looking to explore in the project is: What are the moral, ethical, legal, and clinical implications for physiotherapy practice when the boundaries of medical and health science are significantly shifted as a result of technological advances?

The students will work in small groups that are allocated an area of medicine and health where we are seeing significant change as a result of the integration of advanced technology. Each week in class I will present an idea that is relevant to our Professional Ethics module (for example, the concept of human rights) and then each group will explore that concept within the framework of their topic. So, some might look at how gene therapy could influence how we think about our rights, while others might ask what it even means to be human. I’m not 100% sure how this is going to play out and will most likely adapt the project as we progress, taking into account student feedback and the challenges we encounter. I can foresee some groups having trouble with certain ethical constructs simply because they may not be applicable to their topic.

Exoskeletons are playing an increasingly important role in neurological rehabilitation.
The following list and questions aim to stimulate the discussion and to give some idea of what we are looking at (this list is not exhaustive and I’m still playing around with ideas – suggestions are welcome):

  1. Artificial intelligence and algorithmic ethical decision-making. Can computers be ethical? How is ethical reasoning incorporated into machines? How will ethical algorithms impact health, for example, when computers make decisions about organ transplant recipients? Can ethics be programmed into machines?
  2. Nanotechnology. As our ability to manipulate our world at the atomic level advances, what changes can we expect to see for physiotherapists and physiotherapy practice? How far can we go with integrating technology into our bodies before we stop being “human”?
  3. Gene therapy. What happens when genetic disorders that provide specialisation areas for physiotherapists are eradicated through gene therapy? What happens when we can “fix” the genetic problems that lead to complications that physiotherapists have traditionally had a significant role in managing? For example, what will we do when cystic fibrosis is cured? What happens when we have a vaccine for HIV? Or when ALS is little more than an inconvenience?
  4. Robotics. What happens when patients who undergo amputations are fitted with prosthetics that link to the nervous system? When exoskeletons for paralysed patients are common? How much of robotic systems will students need to know about? Will exoskeletons be the new wheelchairs?
  5. Aging. What happens when the aging population no longer ages? How will physiotherapy change as the human lifespan is extended? There is an entire field of physiotherapy devoted to the management of the aging population; what will happen to that? How will palliative care change?
  6. Augmented reality. When we can overlay digital information onto our visual field, what possibilities exist for effective patient management? For education? What happens when that information is integrated with location-based data, so that patient-specific information is presented to us when we are near that patient?
  7. Virtual reality. What will it mean for training when we can build entire hospitals and patient interactions in the virtual world? When we can introduce students to the ICU in their first year? This could be especially useful when we have challenges with finding enough placements for students who need to do clinical rotations.
  8. 3D printing. What happens when we can print any equipment that we need, that is made exactly to the patient’s specifications? How will this affect the cost of equipment distribution to patients? Can 3D printed crutches be recycled? Reused by other patients? What new kinds of equipment can be invented when we are not constrained by the production lines of the companies who traditionally make the tools we use?
  9. Brain-computer interfaces. When patients are able to control computers (and by extension, everything linked to the computer) simply by thinking about it, what does that mean for their roles in the world? What does it mean when someone with a C7 complete spinal cord injury can still be a productive member of society? What does it mean for community re-integration? How will “rehabilitation” change if computer science is a requirement to even understand the tools our patients use?
  10. Quantified self. As we begin to use sensors close to our bodies (inside our phones, watches, etc.) and soon – inside our bodies – we will have access to an unprecedented amount of personal (very personal) data about ourselves. We will be able to use that data to inform decision making about our health and well-being, which will change the patient-therapist relationship. This will most likely have the effect of modifying the power differential between patients and clinicians. How will we deal with that? Are we training students to know what to do with that patient information? To understand how these sensors work?
  11. Processing power. While this is actually something that is linked to every other item in the list, it might warrant its own topic purely because everything else depends on the continuous improvements in processing power and the parallel reduction in cost.
  12. The internet. I’m not sure about this. While the architecture of the internet itself is unlikely to change much in the next few decades (disregarding the idea that the internet as we know it might be supplanted with something better), who has access to it and how we use it will most certainly change.

An artist's depiction of a nanobot that is smaller than blood cells.
I should state that we will be working under certain assumptions:

  • That the technology will not be uniformly integrated into society and health systems, i.e. that wealth disparity or income inequality will directly affect the implementation of certain therapies. This will, obviously, have ethical and moral implications.
  • That the technology will not be freely available i.e. that corporations will license certain genetic therapies and withhold their use on those who cannot pay the license.
  • That technological progression will continue over time i.e. that regulations will not prevent, for example, further research into stem cell therapy.
  • …we may have to make additional assumptions as we move forward but this is all I can think of now

We’ll probably find that there will be significant overlap in the above topics, since some are specific technologies that will have an influence on other areas. For example, gene therapy and nanotechnology may have an impact on aging; artificial intelligence will impact many areas, as will robotics and computing power. The idea isn’t that these topics are discrete and separate, but that they provide a focus point for discussion and exploration, with the understanding that overlap is inevitable. In fact, overlap is preferable, since it will help us explore relationships between the different areas and to find connections that we maybe were not previously aware of.

Giving patients bad news in virtual spaces where we can control the interaction.
The activities that the students engage in during this project are informed by the following ideas, which overlap with each other:

  • Authentic learning is a framework for designing learning tasks that lead to deeper engagement by students. Authentic tasks should be complex, collaborative, ill-defined, and completed over long periods.
  • Inquiry-based learning suggests that students should identify challenging questions that are aimed at addressing gaps in their understanding of complex problems. The research that they conduct is a process they go through in order to achieve outcomes, rather than being an end in itself.
  • Project-based learning is the idea that we can use full projects – based in the real world – to discuss and explore the disciplinary content, while simultaneously developing important skills that are necessary for learning in the 21st century.

I should be clear that I’m not really sure what the outcome of this project will be. I obviously have objectives for my students’ learning that relate to the Professional Ethics module but in terms of what we cover, how we cover it, what the final “product” is…these are all still quite fluid. I suppose that, ideally, I would like for us as a group (myself and the students) to explore the various concepts together and to come up with a set of suggestions that might help to guide physiotherapy education (or at least, physiotherapy education as practiced by me) over the next 5-10 years.

Augmented reality has significant potential for education.
So much of physiotherapy practice – and therefore, physiotherapy education – is premised on the idea that what has been important over the last 50 years will continue to be important for the next 50. However, as technology progresses and we see incredible advances in the integration of technology into medicine and health systems, we need to ask if the next 50 years are going to look anything like the last 50. In fact, it almost seems as if the most important skill we can teach our students is how to adapt to a constantly changing world. If this is true, then we may need to radically change what we prioritise in the curriculum, as well as how we teach students to learn. When every fact is instantly available, when algorithms influence clinical decision-making, when amputees are fitted with robotic prosthetics controlled directly via brain-computer interfaces…where does that leave the physiotherapist? This project is a first step (for me) towards at least beginning to think about these kinds of questions.

 

I enjoyed reading (July)

Artificial Intelligence Is Now Telling Doctors How to Treat You (Daniela Hernandez)

Artificial intelligence is still in the very early stages of development–in so many ways, it can’t match our own intelligence–and computers certainly can’t replace doctors at the bedside. But today’s machines are capable of crunching vast amounts of data and identifying patterns that humans can’t. Artificial intelligence–essentially the complex algorithms that analyze this data–can be a tool to take full advantage of electronic medical records, transforming them from mere e-filing cabinets into full-fledged doctors’ aides that can deliver clinically relevant, high-quality data in real time.

Carl Sagan on Science and Spirituality (Maria Popova)

Plainly there is no way back. Like it or not, we are stuck with science. We had better make the best of it. When we finally come to terms with it and fully recognize its beauty and its power, we will find, in spiritual as well as in practical matters, that we have made a bargain strongly in our favor.

But superstition and pseudoscience keep getting in the way, distracting us, providing easy answers, dodging skeptical scrutiny, casually pressing our awe buttons and cheapening the experience, making us routine and comfortable practitioners as well as victims of credulity.

Is it OK to be a luddite?

Perhaps, there is some middle-ground, not skepticism or luddism, but what Sean calls digital agnosticism. So often in our discussions of online education and teaching with technology, we jump to a discussion of how or when to use technology without pausing to think about whether or why. While we wouldn’t advocate for a new era of luddism in higher education, we do think it’s important for us to at least ask ourselves these questions.

We use technology. It seduces us and students with its graphic interfaces, haptic touch-screens, and attention-diverting multimodality. But what are the drawbacks and political ramifications of educational technologies? Are there situations where tech shouldn’t be used or where its use should be made as invisible as possible?

Reclaiming the Web for the Next Generation (Doug Belshaw):

Those of us who have grown up with the web sort-of, kind-of know the mechanics behind it (although we could use a refresher). For the next generation, will they know the difference between the Internet and Google or Facebook? Will they, to put it bluntly, know the difference between a public good and a private company?

7 things good communicators must not do (Garr Reynolds): Reynolds creates a short list of items taken from this TED Talk by Julian Treasure. If you can’t watch the video, here are the things to avoid:

1. Gossip
2. Judgement
3. Negativity
4. Complaining
5. Excuses
6. Exaggeration (lying)
7. Dogmatism
Reynolds added another item to the list: 8. Self-absorption

Personal Learning Networks, CoPs Connectivism: Creatively Explained (Jackie Gerstein): Really interesting post demonstrating student examples of non-linguistic knowledge representation.

The intent of this module is to assist you in developing a personalized and deep understanding of the concepts of this unit – the concepts that are core to using social networking as a learning venue. Communities of Practice, Connectivism, Personal Learning Networks, create one or a combination of the following to demonstrate your understanding of these concepts: a slide show or Glog of images, an audio cast of sounds, a video of sights, a series of hand drawn and scanned pictures, a mindmap of images, a mathematical formula, a periodic chart of concepts, or another form of nonlinguistic symbols. Your product should contain the major elements discussed in this module: CoPs, Connectivism, and Personal Learning Networks. These are connected yet different concepts. As such they should be portrayed as separate, yet connected elements.

The open education infrastructure, and why we must build it (David Wiley)

Open Credentials
Open Assessments
Open Educational Resources
Open Competencies

This interconnected set of components provides a foundation which will greatly decrease the time, cost, and complexity of the search for innovative and effective new models of education.

I enjoyed reading (March)


The web as a universal standard (Tony Bates): It wasn’t so much the content of this post that triggered my thinking, but the title. I’ve been wondering for a while what a “future-proof” knowledge management database would look like. While I think the most powerful ones will be semantic (e.g. like the KDE desktop integrated with the semantic web), there will also be a place for standardised, text-based media like HTML.

 

The half-life of facts (Maria Popova):

Facts are how we organize and interpret our surroundings. No one learns something new and then holds it entirely independent of what they already know. We incorporate it into the little edifice of personal knowledge that we have been creating in our minds our entire lives. In fact, we even have a phrase for the state of affairs that occurs when we fail to do this: cognitive dissonance.

 

How parents normalised password sharing (danah boyd):

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.

 

Academic research published as a graphic novel (Gareth Morris): Over the past few months I’ve been thinking about different ways for me to share the results of my PhD (other than the papers and conference presentations that were part of the process). I love the idea of using stories to share ideas, but had never thought about presenting research in the form of a graphic novel.


 

Getting rich off of schoolchildren (David Sirota):

You know how it goes: The pervasive media mythology tells us that the fight over the schoolhouse is supposedly a battle between greedy self-interested teachers who don’t care about children and benevolent billionaire “reformers” whose political activism is solely focused on the welfare of kids. Epitomizing the media narrative, the Wall Street Journal casts the latter in sanitized terms, reimagining the billionaires as philanthropic altruists “pushing for big changes they say will improve public schools.”

The first reason to scoff at this mythology should be obvious: It simply strains credulity to insist that pedagogues who get paid middling wages but nonetheless devote their lives to educating kids care less about those kids than do the Wall Street hedge funders and billionaire CEOs who finance the so-called reform movement. Indeed, to state that pervasive assumption out loud is to reveal how utterly idiotic it really is, and yet it is baked into almost all of today’s coverage of education politics.

 

The case for user agent extremism (Anil Dash): Anil’s post has some close parallels with this speech by Eben Moglen, which I linked to last month. The idea is that, as technology becomes increasingly integrated into our lives, we lose more and more control. We all need to become invested in wresting control of our digital lives and identities back from corporations, although exactly how to do that is a difficult problem.

The idea captured in the phrase “user agent” is a powerful one, that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we’re in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more a user-driven agent of those corporations.

 

Singularities and nightmares (David Brin):

Options for a coming singularity include self-destruction of civilization, a positive singularity, a negative singularity (machines take over), and retreat into tradition. Our urgent goal: find (and avoid) failure modes, using anticipation (thought experiments) and resiliency — establishing robust systems that can deal with almost any problem as it arises.

 

Is AI near a takeoff point? (J. Storrs Hall):

Computers built by nanofactories may be millions of times more powerful than anything we have today, capable of creating world-changing AI in the coming decades. But to avoid a dystopia, the nature (and particularly intelligence) of government (a giant computer program — with guns) will have to change.

 

Notes on podcast from Stephen Downes

I thought I’d make some notes while listening to this podcast interview with Stephen Downes, where he talks about personal learning environments, problems with e-learning, and open vs. closed educational content.

————————————

Mentions Plearn as part of the opening discussion and bio.

What is a PLE? Compares LMS to PLE. LMS is based around the institution, and when the student leaves the system, they lose access to that learning. Same applies when changing institutions, or learning in different environments. PLE provides access to services and educational services from a personal space, rather than an institutional one.

Very new category of “learning system” right now, so there are no applications that exist that define a PLE. Rather, it’s a generic collection of tools and concepts.

Most resources are accessed on the fly, through the browser. Some people have small libraries that they keep locally, but only for backup purposes or content they need to access offline. Students will access lectures as audio and video streams if available. I disagree with the assumption that we’re all connected all the time and that there is no longer a need to download content to be kept locally.

There’s always going to be a mix of local and remote content that’s relevant for learning. A PLE should support whatever works best / whatever the learner needs in whatever context.

Discussed the Khan Academy and the role of online video (YouTube) as an educational resource. The quality of the video production isn’t as important as the quality of the video content. The problem is that the video format is linear, which means that it consumes time and isn’t searchable (it’s not random access). You can’t find the specific piece of information you’re looking for. Content can be more efficiently acquired through text and images.

Videos are also not social or interactive (although video conferences are). Skype conferencing mentioned. Contextual, flexible teaching and learning isn’t really possible when watching video.

Classrooms are not especially well designed for personal learning “1 size fits 30+”.

Is artificial intelligence a viable approach to education? “Going to be tricky”. Some components of the concept available in primitive recommender algorithms currently present in Amazon, iTunes, etc. But going to be a long time before true AI is going to be able to truly personalise the learning experience.

Software will continue to get smarter and understand more and more about what we want to do. It will be able to aggregate, filter, categorise content dynamically.

Discussion on online identity as a tangent to the above point i.e. that your point of entry into the network (i.e. the browser) would be the software that would aggregate, etc. the content you’re interested in. Downes created a tool that did something like this, but which was subsequently superseded by OpenID. Also a brief mention of OAuth.

Briefly talked about SCORM / IMS and the Common Cartridge format (i.e. learning objects). Useful for closed organisations’ learning requirements e.g. the military. Not useful for learning content that needs to be interactive and to engage with other environments / scenarios. Doesn’t do much for the social component and is unnecessarily complex in trying to create “units of knowledge”. The best model is the open web. Many companies trying to create common formats, but also lock consumers in.

Not an easy, decentralised way to create a “learning” management system. But the context there is in managing students or content, not learning. Nothing wrong with the LMS to manage students, but it’s not about learning. How do you give people the freedom to learn in a personal way?

Ends with some discussion on revenue, profit and commercial aspects of education.