UCT seminar: Shaping our algorithms

Tomorrow I’ll be presenting a short seminar at the University of Cape Town on a book chapter that was published earlier this year, called Shaping our algorithms before they shape us. Here are the slides I’ll be using, which I think are a useful summary of the chapter itself.


Book chapter published: Shaping our algorithms before they shape us

I’ve just had a chapter published in an edited collection entitled: Artificial Intelligence and Inclusive Education: Speculative Futures and Emerging Practices. The book is edited by Jeremy Knox, Yuchen Wang and Michael Gallagher and is available here.

Here’s the citation: Rowe M. (2019) Shaping Our Algorithms Before They Shape Us. In: Knox J., Wang Y., Gallagher M. (eds) Artificial Intelligence and Inclusive Education. Perspectives on Rethinking and Reforming Education. Springer, Singapore. https://doi.org/10.1007/978-981-13-8161-4_9.

And here’s my abstract:

A common refrain among teachers is that they cannot be replaced by intelligent machines because of the essential human element that lies at the centre of teaching and learning. While it is true that there are some aspects of the teacher-student relationship that may ultimately present insurmountable obstacles to the complete automation of teaching, there are important gaps in practice where artificial intelligence (AI) will inevitably find room to move. Machine learning is the branch of AI research that uses algorithms to find statistical correlations between variables that may or may not be known to the researchers. The implications of this are profound and are leading to significant progress being made in natural language processing, computer vision, navigation and planning. But machine learning is not all-powerful, and there are important technical limitations that will constrain the extent of its use and promotion in education, provided that teachers are aware of these limitations and are included in the process of shepherding the technology into practice. This has always been important but when a technology has the potential of AI we would do well to ensure that teachers are intentionally included in the design, development, implementation and evaluation of AI-based systems in education.

Comment: DeepMind Can Now Beat Us at Multiplayer Games, Too

DeepMind’s agents are not really collaborating, said Mark Riedl, a professor at Georgia Tech College of Computing who specializes in artificial intelligence. They are merely responding to what is happening in the game, rather than trading messages with one another, as human players do…Although the result looks like collaboration, the agents achieve it because, individually, they so completely understand what is happening in the game.

Metz, C. (2019). DeepMind Can Now Beat Us at Multiplayer Games, Too. New York Times.

The problem with arguments like this is that 1) we end up playing semantic games about what words mean, 2) what we call the computer’s achievement isn’t relevant, and 3) just because the algorithmic solution doesn’t look the same as a human solution doesn’t make it less effective.

The concern around the first point is that, as algorithms become more adept at solving complex problems, we end up painting ourselves into smaller and smaller corners, hemmed in by how we defined the characteristics necessary to solve those problems. In this case, we can define collaboration in a way that means algorithms aren’t really collaborating, but tomorrow, when they can collaborate according to today’s definition, we’ll see people wanting to change the definition again.

The second point relates to competence. Algorithms are designed to be competent at solving complex problems, not to solve them in ways that align with our definitions of what words mean. In other words, DeepMind doesn’t care how the algorithm solves the problem, only that it does. Think about developing a treatment for cancer…will we care that the algorithm didn’t work closely with all stakeholders, as human teams would have to, or will it only matter that we have an effective treatment? In the context of solving complex problems, we care about competence.

And finally, why would it matter that algorithmic solutions don’t look the same as human solutions? In this case, human game-players have to communicate in order to work together because it’s impossible for them to do the computation necessary to “completely understand what is happening in the game”. If we had the ability to do that computation, we’d also drop the “communication” requirement because it would only slow us down and add nothing to our ability to solve the problem.

Article published – An introduction to machine learning for clinicians

It’s a nice coincidence that my article on machine learning for clinicians has been published at around the same time that my poster on a similar topic was presented at WCPT. I’m quite happy with this paper and think it offers a useful overview of the topic of machine learning that is specific to clinical practice and which will help clinicians understand what is at times a confusing topic. The mainstream media (and, to be honest, many academics) conflate a wide variety of terms when they talk about artificial intelligence, and this paper goes some way towards providing some background information for anyone interested in how this will affect clinical work. You can download the preprint here.


Abstract

The technology at the heart of the most innovative progress in health care artificial intelligence (AI) is in a sub-domain called machine learning (ML), which describes the use of software algorithms to identify patterns in very large data sets. ML has driven much of the progress of health care AI over the past five years, demonstrating impressive results in clinical decision support, patient monitoring and coaching, surgical assistance, patient care, and systems management. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyze and interpret the complex interactions between data, patients, and clinical decision-makers. However, as this technology becomes more powerful it also becomes less transparent, and algorithmic decisions are therefore increasingly opaque. This is problematic because computers will increasingly be asked for answers to clinical questions that have no single right answer, are open-ended, subjective, and value-laden. As ML continues to make important contributions in a variety of clinical domains, clinicians will need to have a deeper understanding of the design, implementation, and evaluation of ML to ensure that current health care is not overly influenced by the agenda of technology entrepreneurs and venture capitalists. The aim of this article is to provide a non-technical introduction to the concept of ML in the context of health care, the challenges that arise, and the resulting implications for clinicians.

Technology Beyond the Tools

You didn’t need to know how to print on a printing press in order to read a printed book. Writing implements were readily available in various forms to record thoughts, as well as to communicate them. Their use was simple, requiring nothing more than penmanship. The rapid advancement of technology has changed this. Tech has evolved so quickly and so universally in our culture that there is now a literacy required in order for people to use it effectively and efficiently.

Reading and writing as a literacy was hard enough for many of us, and now we are seeing that there is a whole new literacy that needs to be not only learned, but taught by us as well.

Source: Whitby, T. (2018). Technology Beyond the Tools.

I wrote about the need to develop these new literacies in a recent article (under review) in OpenPhysio. From the article:

As clinicians become single nodes (and not even the most important nodes) within information networks, they will need data literacy to read, analyse, interpret and make use of vast data sets. As they find themselves having to work more collaboratively with AI-based systems, they will need the technological literacy to understand the vocabulary of computer science and engineering, and to communicate with machines. Failing that, we may find that clinicians will simply be messengers and technicians carrying out the instructions provided by algorithms.

It really does seem like we’re moving towards a society in which the successful use of technology is, at least to some extent, premised on your understanding of how it works. As educators, it is incumbent on us to 1) know how the technology works so that we can 2) help students use it effectively while avoiding exploitation by for-profit companies.

See also: Aoun, J. (2017). Robot proof: Higher Education in the Age of Artificial Intelligence. MIT Press.

‘The discourse is unhinged’: how the media gets AI alarmingly wrong

Zachary Lipton, an assistant professor at the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from “interesting-ish research” to “sensationalized crap”. According to Lipton, in recent years broader interest in topics like “machine learning” and “deep learning” has led to a deluge of this type of opportunistic journalism, which misrepresents research for the purpose of generating retweets and clicks – he calls it the “AI misinformation epidemic”.

Source: Schwartz, O. (2018). ‘The discourse is unhinged’: how the media gets AI alarmingly wrong.

There’s a lot of confusion around what we think of as AI. For most people actually working in the field, the current state of AI and machine learning research is about presenting solutions to very narrowly constrained problems, derived from the statistical manipulation of large data sets and expressed within certain confidence intervals. There’s no talk of consciousness, choice, or values of any kind. To be clear, this is “intelligence” as defined within very specific parameters. It’s important that clinicians and educators (and everyone else, actually) understand, at least at a basic level, what we mean when we say “artificial intelligence”.

Of course, there are also people working on issues of artificial general intelligence and superintelligence, which is different to the narrow (or weak) intelligence that is being reported when we see today’s sensationalist headlines.

An introduction to artificial intelligence in clinical practice and education

Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each slide. This presentation is based on a paper I submitted to OpenPhysio, called “Artificial intelligence in clinical practice: Implications for physiotherapy education”.


The graph shows how traffic to a variety of news websites changed after Facebook made a change to their Newsfeed algorithm, highlighting the influence that algorithms have on the information presented to us, and how we no longer make real choices about what to read. When algorithms are responsible for filtering what we see, they have power over what we learn about the world.


The graph shows the near-flat line of social development and population growth until the invention of the steam engine. Before that, all of the Big Ideas we came up with had relatively little impact on our physical well-being. If your grandfather spent his life pushing a plough, there was an excellent chance that you’d spend your life pushing one too. But once we figured out how to augment our physical abilities with machines, we saw significant advances in society and industry and an associated improvement in everyone’s quality of life.


The emergence of artificial intelligence in the form of narrowly constrained machine learning algorithms has demonstrated the potential for important advances in cognitive augmentation. Basically, we are starting to really figure out how to use computers to enhance our intelligence. However, we must remember that we’ve been augmenting our cognitive ability for a long time, from exporting our memories onto external devices, to performing advanced computation beyond the capacity of our brains.


The enthusiasm with which modern AI is being embraced is not new. The research and engineering aspects of artificial intelligence have been around since the 1950s, while fictional AI has an even longer history. The field has been through a series of highs and lows (the lows are called AI winters). The developments during these cycles were fueled by what has become known as Good Old-Fashioned AI: early attempts to explicitly design decision-making into algorithms by hard-coding all possible variations of the interactions in a closed environment. Understandably, these systems were brittle and unable to adapt to even small changes in context. This is one of the reasons that previous iterations of AI had little impact in the real world.


There are three main reasons why it’s different this time. The first is the emergence of cheap but powerful hardware (mainly central and graphics processing units), which has seen computational power grow by a factor of 10 every 4 years. The second is the exponential growth of data; massive data sets are an important reason that modern AI approaches have been so successful. The graph in the middle column shows data growth in zettabytes (10 to the power of 21 bytes). At this rate of growth we’ll run out of metric prefixes in a few years (yotta is the only prefix after zetta). The third is the emergence of vastly improved machine learning algorithms that are able to learn without being explicitly told what to learn. In the example here, the algorithm has coloured in the line drawings to create a pretty good photorealistic image, without being taught any of the underlying concepts, i.e. human, face, colour, drawing, photo, etc.


We’re increasingly seeing evidence that in some very narrow domains of practice (e.g. reasoning and information recall), machine learning algorithms can out-diagnose experienced clinicians. It turns out that computers are really good at classifying patterns of variables that are present in very large datasets, and diagnosis is just a classification problem. For example, algorithms can quite easily find sets of related signs and symptoms and put them into a box that we call “TB”. And increasingly, they are able to do this classification better than the best of us.
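
To make the diagnosis-as-classification framing concrete, here is a minimal sketch using scikit-learn. Everything in it is invented for illustration: the sign/symptom features, the toy “TB”/“not TB” labels and the example patient are all hypothetical, and this is obviously nothing like a clinical tool.

```python
# A minimal sketch of diagnosis as classification (synthetic data only).
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: rows are patients, columns are binary
# signs/symptoms [cough, fever, night_sweats, weight_loss].
X_train = [
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]
y_train = ["TB", "TB", "not TB", "not TB", "not TB", "not TB"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # learn which sign/symptom patterns go in which box

# A new, unseen presentation: cough + night sweats + weight loss, no fever.
print(clf.predict([[1, 0, 1, 1]]))        # e.g. ['TB']
print(clf.predict_proba([[1, 0, 1, 1]]))  # class probabilities
```

The point is only that, once you frame diagnosis as pattern classification, the same machinery that sorts emails into “spam” and “not spam” applies; at scale, and with real data, that machinery can become very good at it.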


It is estimated that up to 60% of a doctor’s time is spent capturing information in the medical record. Natural language processing algorithms are able to “listen” to the ambient conversation between a doctor and patient, record the audio and transcribe it (translating it in the process if necessary). The system then performs semantic analysis of the text (not just keyword analysis) to extract meaningful information, which it can use to populate an electronic health record. While the technology is in a very early phase and not yet safe for real-world application, it’s important to remember that this is the worst it’s ever going to be. Even if we reach some kind of technological dead end with respect to machine learning and from now on only increase efficiency, we are still looking at a transformational technology.


An algorithm recently passed the Chinese national medical exam, qualifying (in theory) as a physician. While we can argue that practising as a physician is more than writing a text-based exam, it’s hard not to acknowledge the fact that – at the very least – machines are becoming more capable in the domains of knowledge and reasoning that characterise much of clinical practice. Again, this is the worst that this technology is ever going to be.


This graph shows the number of AI applications under development in a variety of disciplines, including medicine (on the far right). The green segment shows the number of applications where AI is outperforming human beings. Orange segments show the number of applications that are performing relatively well, with blue highlighting areas that need work. There are two other points worth noting: medical AI is the area of research showing the most significant advances (maybe because it’s the area where companies can make the most money); and all the way at the far left of the graph is education, suggesting that it may be some time before algorithms show the same progress in teaching.


Contrary to what we see in the mainstream media, AI is not a monolithic field of research; it consists of a wide variety of different technologies and philosophies that are sometimes referred to under the more general heading of “AI”. While much of the current progress is driven by machine learning algorithms (which is itself driven by the three characteristics of modern society highlighted earlier), there are many areas of development, each of which can potentially contribute to different areas of clinical practice. For the purposes of this presentation, we can define AI as any process that is able to independently achieve an objective within a narrowly constrained domain of interest (although the constraints are becoming looser by the day).


Machine learning is a sub-domain of AI research that works by exposing an algorithm to a massive data set and asking it to look for patterns. By comparing what it finds to human-tagged patterns in the data, developers can fine-tune the algorithm (i.e. “teach” it) before exposing it to untagged data and seeing how well it performs relative to the training set. This generally describes the “learning” process of machine learning. Deep learning is a sub-domain of machine learning that works by passing data through many layers, allocating different weights to the data at each layer, thereby coming up with a statistical “answer” that expresses an outcome in terms of probability. Deep learning neural networks underlie many of the advances in modern AI research.
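
As a rough illustration of the “layers and weights” idea, here is a minimal forward pass through a two-layer network in plain NumPy. The weights are arbitrary numbers made up for the example; in a real network they would be learned by repeatedly comparing outputs against tagged examples and adjusting the weights (backpropagation).

```python
# A minimal sketch of data passing through weighted layers to produce
# a probability. Weights here are arbitrary, not learned.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.9, 0.4])         # one input with three features

W1 = np.array([[0.5, -0.3, 0.8],      # layer 1: four units, each with its
               [0.1,  0.7, -0.6],     # own weight for every input feature
               [-0.4, 0.2,  0.9],
               [0.3, -0.8,  0.5]])
W2 = np.array([0.6, -0.2, 0.9, 0.4])  # layer 2: combines the four outputs

h = sigmoid(W1 @ x)  # pass the data through the first layer
p = sigmoid(W2 @ h)  # the second layer yields a single probability

print(f"P(outcome) = {p:.3f}")  # the statistical "answer"
```

Deep networks are this same idea repeated across many more layers and millions of weights, which is also why their outputs become so hard for humans to trace.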


Because machine and deep learning algorithms are trained on (biased) human-generated datasets, it’s easy to see how the algorithms themselves will have an inherent bias embedded in the outputs. The Twitter screenshot shows one of the least offensive tweets from Tay, an AI-enabled chatbot created by Microsoft, which learned from human interactions on Twitter. In the space of a few hours, Tay became a racist, sexist, homophobic monster – because this is what it learned from how we behave on Twitter. This is more of an indictment of human beings than it is of the algorithm. The other concern with neural networks is that, because of the complexity of the algorithms and the number of variables being processed, human beings are unable to comprehend how the output was computed. This has important implications when algorithms are helping with clinical decision-making and is the reason that resources are being allocated to the development of what is known as “explainable AI”.
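
A toy demonstration of “bias in, bias out”, assuming nothing more than scikit-learn and a deliberately skewed, entirely invented dataset: a model trained on biased historical decisions faithfully reproduces the bias for otherwise identical cases.

```python
# A tiny synthetic illustration of how bias in training data becomes
# bias in the output. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Features: [test_score, group], where group is 0 or 1. In this invented
# history, group 1 was approved less often at the same test score.
X = [[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y = [1, 1, 1, 1, 0, 0]  # approvals: group 1 penalised at equal scores

model = LogisticRegression().fit(X, y)

# Two applicants who differ only in group membership:
print(model.predict_proba([[0.8, 0]])[0][1])  # higher approval probability
print(model.predict_proba([[0.8, 1]])[0][1])  # lower approval probability
```

The model hasn’t “decided” anything about either group; it has simply learned the pattern we gave it, which is exactly what happened with Tay.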


As a result of the changes emerging from AI-based technologies in clinical practice, we will soon need to stop thinking of our roles in terms of “professions” and think instead in terms of “tasks”. This matters because, increasingly, many of the tasks we associate with our professional roles will be automated. This is not all bad news though, because it seems probable that increased automation of the repetitive tasks in our repertoire will free us up to take on more meaningful ones, for example, having more time to interact with patients. We need to start asking which tasks computers are better at, and start allocating those tasks to them. Of course, we will need to define what we mean by “better”: more efficient, more cost-effective, faster, etc.


Another important change that will require the use of AI-based technologies in clinical practice will be the inability of clinicians to manage – let alone understand – the vast amount of information being generated by, and from, patients. Not only are all institutional tests and scans digital but increasingly, patients are creating their own data via wearables – and soon, ingestibles – all of which will require that clinicians are able to collect, filter, analyse and interpret these vast streams of information. There is evidence that, without the help of AI-based systems, clinicians simply will not have the cognitive capacity to understand their patients’ data.


The impact of more patient-generated health data is that we will see patients being in control of their data, which will exist on a variety of platforms (cloud storage, personal devices, etc.), none of which will be available to the clinician by default. This means that power will move to the patient as they make choices about who to allow access to their data in order to help them understand it. Clinicians will need to come to terms with the fact that they will no longer wield the power in the relationship and in fact, may need to work within newly constituted care teams that include data scientists, software engineers, UI designers and smart machines. And all of these interactions will be managed by the patient who will likely be making choices with inputs from algorithms.


The incentives for enthusiastic claims around developments in AI-based clinical systems are significant; this is an academic land grab the likes of which we have rarely experienced. The scale of the funding involved puts pressure on researchers to exaggerate claims in order to be the first to every important milestone. This means that clinicians will need to become conversant with the research methods and philosophies of the data scientists who are publishing the most cutting-edge research in the medical field. The time will soon come when it will be difficult to understand the language of healthcare without first understanding the language of computer science.


The implications for health professions educators are profound, as we will need to start asking ourselves what we are preparing our graduates for. When clinical practice is enacted in an intelligent environment and clinicians are only one of many nodes in vast information networks, what knowledge and skills do they need to thrive? When machines outperform human beings in knowledge and reasoning tasks, what is the value of teaching students about disease progression, for example? We may find ourselves graduating clinicians who are well-trained, competent and irrelevant. It is not unreasonable to think that the profession called “doctor” will not exist in 25 years’ time, having been superseded by a collective of algorithms and third-party service providers who offer more fine-grained services at a lower cost.


There are three new literacies that health professions educators will need to begin integrating into our undergraduate curricula. Data literacy, so that healthcare graduates will understand how to manage, filter, analyse and interpret massive sets of information in real-time; Technological literacy, as more and more of healthcare is enacted in digital spaces and mediated by digital devices and systems; and Human literacy, so that we can become better at developing the skillsets necessary to interact more meaningfully with patients.


There is evidence to suggest that, while AI-based systems outperform human beings on many of the knowledge and reasoning tasks that make up clinical practice, the combination of AI and human originality results in the most improved outcomes of all. In other words, we may find that patient outcomes are best when we figure out how to combine human creativity and emotional response with machine-based computation.


And just when we’re thinking that “creativity” and “originality” are the sole province of human beings, we’re reminded that AI-based systems are making progress in those areas as well. It may be that the only way to remain relevant in a constantly changing world is to develop our ability to keep learning.

OpenPhysio abstract: Artificial intelligence in clinical practice – Implications for physiotherapy education

Here is the abstract of a paper I recently submitted to OpenPhysio, a new open-access journal with an emphasis on physiotherapy education.

About 200 years ago the invention of the steam engine ushered in an era of unprecedented development and growth in human social and economic systems, whereby human labour was supplanted by machines. The recent emergence of artificially intelligent machines has seen human cognitive capacity augmented by computational agents that are able to recognise previously hidden patterns within massive data sets. The characteristics of this second machine age are already influencing all aspects of society, creating the conditions for disruption to our social, economic, education, health, legal and moral systems, and will likely have a far greater impact on human progress than the steam engine did. As AI-based technology becomes increasingly embedded within devices, people and systems, the fundamental nature of clinical practice will evolve, resulting in a healthcare system requiring profound changes to physiotherapy education. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and the newly-constituted care teams that will emerge. This paper describes some of the possible influences of AI-based technologies on physiotherapy practice, and the subsequent ways in which physiotherapy education will need to change in order to graduate professionals who are fit for practice in a 21st-century health system.

Read the full paper at OpenPhysio (note that this article is still under review).

altPhysio | Technology as infrastructure

This is the fourth post in my altPhysio series, where I’m exploring alternative ways of thinking about a physiotherapy curriculum by imagining what a future school might look like. This post is a bit longer than the others because this is an area I’m really interested in and spend a lot of time thinking about. I’ve also added more links to external sources because some of this stuff sounds like science fiction. The irony is that everything in this post describes technology that currently exists, and as long as we’re thinking about whether or not to share PowerPoint slides we’re not paying attention to what’s important. This post was a ton of fun to write.

Q: Can you talk a little bit about the history of technology integration in health professions education? Maybe over the last decade or so.

In the early part of the 21st century we saw more institutions starting to take the integration of technology seriously. Unfortunately, the primary use of digital services at the time was moving content around more efficiently. Even though the research was saying that the content component was less important for learning than the communication component, we still saw universities using the LMS primarily to share notes and presentations with students.

The other thing is that we were always about 5-10 years behind the curve when it came to the adoption of technology. For example, wikis started showing up in the medical education literature almost 10 years after they were invented. The same with MOOCs. I understand the need to wait and see how technologies stabilise and then choose something that’s robust and reliable. But the challenge is that you lose out on the early-mover advantages of adopting the technology early. That’s why we tend to adopt a startup mentality to how we use technology at altPhysio.

Q: What do you mean by that? How is altPhysio like a startup?

We pay attention to what’s on the horizon, especially the emerging technologies that have the potential to make an impact on learning in 1, 2 and 5 year time frames. We decided that we weren’t going to wait and see what technologies stabilised and would rather integrate the most advanced technologies available at the time. We designed our programme to be flexible and to adapt to change based on what’s happening around us. When the future is unknowable because technological advances are happening faster than you can anticipate, you need a system that can adapt to the situations that emerge. We can’t design a rigid curriculum that attempts to guess what the future holds. So we implement and evaluate rapidly, constantly trying out small experiments with small groups of students.

Once we decided that we’d be proactive instead of reactive in how we use and think about technology, we realised that we’d need a small team in the school who are on the lookout for technologies that have the potential to enhance the curriculum. The team consists of students and staff who identify emerging technologies before they become mainstream, prepare short reports for the rest of the school, recruit beta testers and plan small scale research projects that highlight the potential benefits and challenges of implementing the technology at scale.

We’ve found that this is a great way for students to invest themselves in their own learning, drive research in areas they are interested in, take leadership roles and manage small projects. Staff on the team act as supervisors and mentors, but in fact are often students themselves, as both groups push each other further in terms of developing insights that would not be possible working in isolation.

Q: But why the emphasis on technology in health professions education? Isn’t this programme about developing physiotherapists?

The WHO report on the use of elearning for undergraduate health professional education called for the integration of technology into the curriculum, as did the Lancet Commission report. And it wasn’t just about moving content more efficiently through the system, but rather about using technology intentionally to change how we think about the curriculum and student learning. The ability to learn is increasingly mediated by digital and information literacy, and we want our students’ learning potential to be maximised.

Low levels of digital literacy in the 21st century are akin to a limited ability to read and write in the past. Imagine trying to learn in the 20th century without being able to read and write. Well, that’s what it’s like trying to learn today if you don’t have a grasp of how digital technologies mediate your construction of knowledge. Integrating technology is not about adding new gadgets or figuring out how to use Facebook groups more effectively.

Technology is an infrastructure that can be used to open up and enhance students’ learning, or to limit it. Freire said that there’s no such thing as a neutral education process, and we take seriously the fact that the technologies we use have a powerful influence on students’ learning.

Q: How do you develop digital and information literacy alongside the competencies that are important for physiotherapists? Doesn’t an emphasis on technology distract students from the core curriculum?

We don’t offer “Technology” as something separate to the physiotherapy curriculum, just as you don’t offer “Pen and paper” as something that is separate. The ability to use a pen and paper used to be an integral and inseparable aspect of learning, and we’ve just moved that paradigm to now include digital and information literacy. Technology isn’t separate to learning, it’s a part of learning just like pen and paper used to be.

Digital and information literacy is integrated into everything that happens at the school. For example, when a new student registers they are immediately allocated a domain on the school servers, along with a personal URL: a digital domain of their own where they get to build out their personal learning environment. This is where they make notes, pull in additional resources like books and video, and work on their projects. It’s a complete online workspace that allows individual and collaborative work and serves as a record of their progress through the programme. It’s really important to us that students learn how to control the digital spaces that they use for learning, and that they’re able to keep control over those spaces after they graduate.

When students graduate, their personal curriculum goes with them, containing the entire curriculum (every resource we shared with them) as well as every artefact of learning they created and every resource that they pulled in themselves. Our students never lose the content that they aggregated over the duration of the programme but, more importantly, they never lose the network they built over that time. The learning network is by far the most important part of the programme, and includes not only the content relationships they’ve formed during the process but also all of their interactions with teachers, supervisors, clinicians and tutors.

Q: Why is it important for students to work in digital space, as well as physical space? And how do your choices about online spaces impact on students’ learning?

Think about how the configuration of physical space in a 20th century classroom dictated the nature of interactions that were possible in that space. How did the walls, desks and chairs, and the position of the lecturer determine who spoke, for example? Who moved? Who was allowed to move? How was work done in that space? Think about how concepts of “front” and “back” (in a classroom) have connotations for how we think about who sits where.

Now, how does the configuration of digital space change the nature of the interactions that are possible in that space? How we design the learning environment (digital or physical) not only enables or disables certain kinds of interactions, but it says something about how we think about learning. Choosing one kind of configuration over another articulates a set of values. For example, we value openness in the curriculum, from the licensing of our course materials, to the software we build on. This commitment to openness says something about who we are and what is important to us.

The fact that our students begin here with their own digital space – a personal learning environment – that they can configure in meaningful ways to enhance their potential for learning, sends a powerful message. Just like the physical classroom configuration changes how power is manifested, so can the digital space. Our use of technology tells students that they have power in terms of making choices with respect to their learning.

To go back to your question about the potential for technology to distract students from learning physiotherapy; did you ever think about how classrooms – the physical configuration of space – distracted students from learning? Probably not. Why not?

Q: You mentioned that openness is an important concept in the curriculum. Can you go into a bit more detail about that?

Maybe it would be best to use a specific example because there are many ways that openness can be defined. Our curriculum is an open source project that gives us the ability to be as flexible and adaptable as a 21st century curriculum needs to be. It would be impossible for us to design a curriculum that was configured for every student’s unique learning needs and that was responsive to a changing social context, so we started with a baseline structure that could be modified over time by students.

We use a GitHub repository to host and collaborate on the curriculum. Think of a unique instance of the curriculum that is the baseline version – the core – that is hosted on our servers. When a student registers, we fork that curriculum to create another, unique instance on the student’s personal digital domain. At that moment, the curriculum on the student’s server is an exact copy of the one we have, but almost immediately the student’s version is modified based on their personal context. For example, the entire curriculum – including all of the content associated with the programme – is translated into the student’s home language if they so choose. Now that it’s on their server, they can modify it to better suit them, using annotation and editing tools, and integrating external resources into their learning environment.

One of the most powerful features of the system is that it allows students to push ideas back into our core curriculum. They make changes on their own versions and, if they’d like to see a change implemented across the programme, they send us a “pull request”, which is basically a message that shows the suggested change along with a comment explaining why the student wants it. It’s a feedback mechanism for them to send us signals on what works well and what can be improved, and it enables us to constantly refine and improve the curriculum based on real-time input from students.
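
For readers unfamiliar with the mechanics, a pull request like this can be opened programmatically against GitHub’s REST API. The sketch below is hypothetical: the repository, branch names, token and content are placeholders for this imagined school, although the endpoint and payload follow GitHub’s documented pulls API.

```python
# A hedged sketch of the pull-request feedback loop, via GitHub's REST API.
# Repository, branches and token are hypothetical placeholders.
import requests

GITHUB_TOKEN = "ghp_example"        # hypothetical personal access token
CORE_REPO = "altphysio/curriculum"  # hypothetical core curriculum repo

response = requests.post(
    f"https://api.github.com/repos/{CORE_REPO}/pulls",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "title": "Add isiXhosa translations for the shoulder module",
        # head = the student's fork and branch; base = the core curriculum
        "head": "student123:shoulder-module-isixhosa",
        "base": "main",
        "body": "These notes helped my tutorial group; suggesting them "
                "for the core curriculum.",
    },
)
print(response.status_code, response.json().get("html_url"))
```

The design point is that the curriculum becomes a living, versioned artefact: every suggestion is reviewable, attributable and reversible, exactly as with open source software.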

On top of this, every time we partner with other institutions, they can fork the curriculum and modify it to suit their context, and then push the changes back upstream. This means that the next time someone wants to partner with us, the core curriculum they can choose from is bigger and more comprehensive. For example, our curriculum is now the largest database of case studies in the world because most institutions that fork the curriculum and make their own changes also send those changes back to the core.

Q: You have a very different approach to a tutorial system. Tell us about how tutors are implemented in your school.

The tutors at altPhysio are weak AI agents – relatively simple algorithms that perform within very narrow constraints linked to basic tasks associated with student learning. Students “connect” with their AI tutors in the first week of the programme, which for the most part involves downloading an app onto their phones. This is then synced across all of their other devices and digital spaces, including laptops, wearables and cloud services, so that the AI is “present” in whatever context the student is learning.

As AI has become increasingly commoditised in the last decade, AI as a service has allowed us to take advantage of features that enhance learning. For example, a student’s tutor will help her with establishing a learning context, finding content related to that context, and reasoning through the problems that arise in the context. In addition, the AIs help students manage time on task, remind them about upcoming tasks and the associated preparation for those tasks, and generally keep them focused on their learning.

Over time the algorithms evolve with students, becoming increasingly tied to them and their personal learning patterns. While all AI tutors begin with the same structure and function, they gradually become more tightly integrated with the student. Some of the more adventurous students have had their AIs integrated with neural lace implants, which has significantly accelerated their ability to function at much higher levels and at much greater speeds than the rest of us. These progressions have obviously made us think very differently about assessment.

Q: What about technology used during lectures? Is there anything different to what you’ve already mentioned?

Lectures have a different meaning here than at other institutions, and I suspect we’ll talk about that later. Anyway, during lectures the AI tutors act as interpreters for the students, performing real-time translation for our international speakers, as well as doing speech-to-text transcription in real time. This means that our deafblind students can have all speech converted to braille in real time, which is pretty cool. All the audio, video and text that is generated during lectures is saved, edited and synced to the students’ personal domains, where it’s available for recall later.

Our students use augmented reality a lot in the classroom and clinical context, overlaying digital information on their visual fields in order to get more context in the lecture. For example, while I’m talking about movement happening at the elbow, a student might choose to display the relevant bones, joints and muscles responsible for the movement. As the information is presented, they can choose to save that additional detail into the point in the lecture where I discussed it, so that when they’re watching the video of the lecture later, the additional information is included. We use this system a lot for anatomy and other movement- and structure-type classes.


Q: That sounds like a pretty comprehensive overview of how technology has some important uses beyond making content easier to access. Any final thoughts?

Technology is not something that we “do”, it’s something that we “do things with”. It enables more powerful forms of communication and interaction, both in online and physical spaces, and to think of it in terms of another “platform” or “service” is to miss the point. It amplifies our ability to do things in the world and just because it’s not cheap or widely distributed today doesn’t mean it won’t be in the future.

In 2007 the iPhone didn’t exist. Now every student in the university carries in their pocket a computer more powerful than the ones we used to put men on the moon. We should be more intentional about how we use that power, and forget about whatever app happens to be trending today.


Technology will make lecturers redundant – but only if they let it


This article was originally published on The Conversation. Read the original article.

A teacher walks into a classroom and begins a lesson. As she speaks, the audio is translated in real time into a variety of languages that students have pre-selected, so each can hear the lecturer’s voice in their own language. It can even be delivered directly into their auditory canal so that it does not disturb other students. The lecturer’s voice is also transcribed in real-time, appearing in a display that presents digital content over the students’ visual field.

As the lesson progresses, students identify concepts they feel need further clarification. They submit highly individual queries to search engines that use artificial intelligence algorithms to filter and synthesise results from a variety of sources. This information is presented in their augmented reality system, along with the sources used, and additional detail in the form of images and animations.


All of the additional information gathered by students is collated into a single set of notes for the lesson, along with video and audio recordings of the interactions. It’s then published to the class server.

This isn’t science fiction. All of the technology described here currently exists. Over time it will become more automated, economical and accurate.

What does a scenario like the one described here mean for lecturers who think that “teaching” means selecting and packaging information for students? There are many excellent theoretical reasons for why simply covering the content or “getting through the syllabus” has no place in higher education. But for the purposes of this article I’ll focus on the powerful practical reasons that lecturers who merely cover the content are on a guaranteed path to redundancy.

The future isn’t coming – it’s here

The technology described above may sound outlandish and seem totally out of most students’ reach. But consider the humble – and ubiquitous – smartphone. A decade ago, the iPhone didn’t exist. Five years ago most students in my classes at a South African university didn’t have smartphones. Today, most do. Research shows that this growth is mirrored across Africa. The first cellphones were prohibitively expensive, but now smartphones and tablets are handed out to people opening a bank account. The technology on these phones is also becoming increasingly powerful, and will continue to advance so that what is cutting edge today will be mainstream in about five years’ time.

This educational technology can change the way that university students learn. But ultimately, machines can’t replace teachers. Unless, that is, teachers are just selecting and packaging content with a view to “getting through the syllabus”. As demonstrated above, computers and algorithms are becoming increasingly adept at the filtering and synthesis of specialised information. Teachers who focus on the real role of universities – teaching students how to think deeply and critically – and who have an open mind, needn’t fear this technology.

Crucial role of universities

In a society where machines are taking over more and more of our decision-making, we must acknowledge that the value of a university does not lie in academics who see their work as controlling access to specialised knowledge.

Rather, it’s that higher education institutions constitute spaces that encourage in-depth investigation into the nature of the world. The best university teachers don’t just focus on content because doing so would reduce their roles to information filters who simply make decisions about what content is important to cover.

Digital tools are quickly getting to the point where algorithms will outperform experts, not only in filtering content but also in synthesising it. Teachers should embrace technology by encouraging their students to build knowledge through digital networks both within and outside the academy. That way they will never become redundant. And they’ll ensure that their graduates are critical thinkers, not just technological gurus.