UCT seminar: Shaping our algorithms

Tomorrow I’ll be presenting a short seminar at the University of Cape Town on a book chapter that was published earlier this year, called Shaping our algorithms before they shape us. Here are the slides I’ll be using, which I think are a useful summary of the chapter itself.


Book chapter published: Shaping our algorithms before they shape us

I’ve just had a chapter published in an edited collection entitled: Artificial Intelligence and Inclusive Education: Speculative Futures and Emerging Practices. The book is edited by Jeremy Knox, Yuchen Wang and Michael Gallagher and is available here.

Here’s the citation: Rowe M. (2019) Shaping Our Algorithms Before They Shape Us. In: Knox J., Wang Y., Gallagher M. (eds) Artificial Intelligence and Inclusive Education. Perspectives on Rethinking and Reforming Education. Springer, Singapore. https://doi.org/10.1007/978-981-13-8161-4_9.

And here’s my abstract:

A common refrain among teachers is that they cannot be replaced by intelligent machines because of the essential human element that lies at the centre of teaching and learning. While it is true that there are some aspects of the teacher-student relationship that may ultimately present insurmountable obstacles to the complete automation of teaching, there are important gaps in practice where artificial intelligence (AI) will inevitably find room to move. Machine learning is the branch of AI research that uses algorithms to find statistical correlations between variables that may or may not be known to the researchers. The implications of this are profound and are leading to significant progress being made in natural language processing, computer vision, navigation and planning. But machine learning is not all-powerful, and there are important technical limitations that will constrain the extent of its use and promotion in education, provided that teachers are aware of these limitations and are included in the process of shepherding the technology into practice. This has always been important but when a technology has the potential of AI we would do well to ensure that teachers are intentionally included in the design, development, implementation and evaluation of AI-based systems in education.

a16z Podcast: Revenge of the Algorithms (Over Data)… Go! No?

An interesting (and sane) conversation about the defeat of AlphaGo by AlphaGo Zero. It almost completely avoids the science-fiction-y media coverage that tends to emphasise the potential for artificial general intelligence and instead focuses on the following key points:

  • Go is a stupendously difficult board game for computers to play, but it's a game in which both players have perfect information and the rules are relatively simple. This does not reflect the situation in any real-world decision-making scenario, so it is necessarily a very narrow demonstration of what an intelligent machine can do.
  • AlphaGo Zero represents an order of magnitude improvement in algorithmic modelling and power consumption. In other words, it does a lot more with a lot less.
  • Related to this, AlphaGo Zero started from scratch, with humans providing only the rules of the game. So Zero used reinforcement learning (rather than supervised learning) to discover the same moves that human beings have worked out over the last thousand years or so, and in some cases better ones (see the sketch after this list).
  • It’s an exciting achievement but shouldn’t be conflated with any significant step towards machine intelligence that transfers beyond highly constrained scenarios.
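
To make the supervised-versus-reinforcement distinction concrete, here is a minimal, hypothetical sketch (nothing like DeepMind's actual system): a toy agent that learns tic-tac-toe purely from self-play, with the rules of the game as the only human input. It updates a simple value table from the final result of each game.

```python
# Toy self-play learner: the only human input is the rules of the game.
# The agent improves purely from the win/lose/draw signal of its own games.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

Q = defaultdict(float)          # (board, move) -> estimated value
ALPHA, EPSILON = 0.5, 0.1       # learning rate and exploration rate

def choose(board):
    legal = moves(board)
    if random.random() < EPSILON:                     # explore
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(board, m)])    # exploit

def self_play_episode():
    board, player, history = " " * 9, "X", []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not moves(board):
            # Credit every move with the final result of the game.
            for state, m, p in history:
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):
    self_play_episode()

print("learned opening move for X:", max(range(9), key=lambda m: Q[(" " * 9, m)]))
```

AlphaGo Zero replaces the lookup table with a deep neural network guided by Monte Carlo tree search, but the underlying loop is the same: play against yourself, learn from the outcome, repeat.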

Here’s the abstract from the publication in Nature:

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Another Terrible Idea from Turnitin | Just Visiting

Allowing the proliferation of algorithmic surveillance as a substitution for human engagement and judgment helps pave the road to an ugly future where students spend more time interacting with algorithms than instructors or each other. This is not a sound way to help writers develop robust and flexible writing practices.

Source: Another Terrible Idea from Turnitin | Just Visiting

First of all, I don’t use Turnitin and I don’t see any good reason for doing so. Combating the “cheating economy” doesn’t depend on us catching the students; it depends on creating the conditions in which students believe that cheating offers little real value relative to the pedagogical goals they are striving for. In general, I agree with a lot of what the author is saying.

So, with that caveat out of the way, I wanted to comment on a few other pieces in the article that I think make significant assumptions and limit the utility of the piece, especially with respect to how algorithms (and software agents in particular) may be useful in the context of education.

  • The use of the word “surveillance” in the quote above establishes the context for the rest of the paragraph. If the author had used “guidance” instead, the tone would be different. Same with “ugly”; remove that word and the meaning of the sentence is very different. It just makes it clear that the author has an agenda which clouds some of the other arguments about the use of algorithms in education.
  • For example, the claim that it’s a bad thing for students to interact with an algorithm instead of another person is empirical; it can be tested. But it’s presented here in a way that implies that human interaction is simply better. Case closed. But what if we learned that algorithmic guidance (via AI-based agents/tutors) actually led to better student outcomes than learning with/from other people? Would we insist on human interaction because it would make us feel better? Why not test our claims by doing the research before making judgements?
  • The author uses a moral argument (at least, this was my take based on the language used) to position AI-based systems (specifically, algorithms) as being inherently immoral with respect to student learning. There’s a confusion between the corporate responsibility of a private company – like Turnitin – to make a profit, and the (possibly pedagogically sound) use of software agents to enhance some aspects of student learning.

Again, there’s some good advice around developing assignments and classroom conditions that make it less likely that students will want to cheat. This is undoubtedly a Good Thing. However, some of the claims about the utility of software agents are based on assumptions that aren’t necessarily supported by the evidence.

The Future of Artificial Intelligence Depends on Trust

To open up the AI black box and facilitate trust, companies must develop AI systems that perform reliably — that is, make correct decisions — time after time. The machine-learning models on which the systems are based must also be transparent, explainable, and able to achieve repeatable results.

Source: Rao, A. & Cameron, E. (2018). The Future of Artificial Intelligence Depends on Trust.

It still bothers me that we insist on explainability for AI systems while we’re quite happy for the decisions of clinicians to remain opaque, inaccurate, and unreliable. We need to move past the idea that there’s anything special about human intuition and that algorithms must satisfy a set of criteria that we would never dream of applying to ourselves.

Defensive Diagnostics: the legal implications of AI in radiology

Doctors are human. And humans make mistakes. And while scientific advancements have dramatically improved our ability to detect and treat illness, they have also engendered a perception of precision, exactness and infallibility. When patient expectations collide with human error, malpractice lawsuits are born. And it’s a very expensive problem.

Source: Defensive Diagnostics: the legal implications of AI in radiology

There are a few things to note in this article. The first, and most obvious, is that we hold AI-based expert systems (i.e. algorithmic diagnosis and prediction) to a much higher standard than human experts; our expectations for algorithmic clinical decision-making are far more exacting than those we have for physicians. It seems strange that we accept the fallibility of human beings but expect nothing less than perfection from AI-based systems. [1]

Medical errors are more frequent than anyone cares to admit. In radiology, the retrospective error rate is approximately 30% across all specialities, with real-time error rates in daily practice averaging between 3% and 5%.

The second takeaway is that one of the most significant areas of influence for AI in clinical settings may not be the primary diagnosis but rather the follow-up analysis that highlights potential mistakes the clinician may have made. These applications of AI for secondary diagnostic review will be cheap and won’t add to the workload of healthcare professionals: they will simply review the clinician’s conclusion and flag the cases that may benefit from additional testing. Of course, this will probably be driven by patient litigation.


[1] Incidentally, the same principle seems to be true for self-driving cars; we expect nothing but a perfect safety record from autonomous vehicles but are quite happy with the status quo for human drivers (1.2 million traffic-related deaths in a single year). Where is the moral panic around the mass slaughter of human beings by human drivers? If an algorithm is even slightly safer than a human being behind the wheel, it would result in thousands fewer deaths per year. And yet it feels like we’re going to delay the introduction of autonomous cars until they meet some perfect standard. To me at least, that seems morally wrong.

Algorithms are not robots

We should stop using images of humanoid robots to represent an embodied form of artificial intelligence, especially when the AI being referenced is an algorithm, which in almost all mainstream media coverage it is. It’s confusing for readers because we’re nowhere near the kind of general intelligence that these pictures imply. For the foreseeable future, “AI” means machine learning algorithms that “maximise a reward function” and are incapable of anything more than solving very specific problems, with a lot of help.

AI isn’t magic, it’s just maths. I know that statistical methods aren’t as cool as the androids but if we really want people to get a better conceptual understanding of AI we’d be better off using images like this to illustrate the outputs of AI-based systems:

A.I. Versus M.D. What happens when diagnosis is automated?

The word “diagnosis,” he reminded me, comes from the Greek for “knowing apart.” Machine-learning algorithms will only become better at such knowing apart—at partitioning, at distinguishing moles from melanomas. But knowing, in all its dimensions, transcends those task-focussed algorithms. In the realm of medicine, perhaps the ultimate rewards come from knowing together.

Source: A.I. Versus M.D. What happens when diagnosis is automated?

This New Yorker article by Siddhartha Mukherjee explores the implications for practice and diagnostic reasoning in a time when software is increasingly implicated in clinical decision-making. While the article is more than a year old (a long time in AI and machine learning research), it still stands up as an excellent, insightful overview of the state of AI-based systems in the domain of clinical care. It’s a long read but well worth it.

An introduction to artificial intelligence in clinical practice and education

Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each slide. This presentation is based on a paper I submitted to OpenPhysio, called: “Artificial intelligence in clinical practice: Implications for physiotherapy education“.


The graph shows how traffic to a variety of news websites changed after Facebook made a change to their Newsfeed algorithm, highlighting the influence that algorithms have on the information presented to us, and how we no longer make truly independent choices about what to read. When algorithms are responsible for filtering what we see, they have power over what we learn about the world.


The graph shows the near-flat line of social development and population growth until the invention of the steam engine. Before that, all of the Big Ideas we came up with had relatively little impact on our physical well-being: if your grandfather spent his life pushing a plough, there was an excellent chance that you’d spend your life pushing one too. But once we figured out how to augment our physical abilities with machines, we saw significant advances in society and industry and an associated improvement in everyone’s quality of life.


The emergence of artificial intelligence in the form of narrowly constrained machine learning algorithms has demonstrated the potential for important advances in cognitive augmentation. Basically, we are starting to really figure out how to use computers to enhance our intelligence. However, we must remember that we’ve been augmenting our cognitive ability for a long time, from exporting our memories onto external devices, to performing advanced computation beyond the capacity of our brains.


The enthusiasm with which modern AI is being embraced is not new. The research and engineering aspects of artificial intelligence have been around since the 1950s, while fictional AI has an even longer history. The field has been through a series of highs and lows (the lows are known as AI winters). The developments during these earlier cycles were fuelled by what has become known as Good Old-Fashioned AI: attempts to explicitly design decision-making into algorithms by hard-coding all possible variations of the interactions in a closed environment. Understandably, these systems were brittle and unable to adapt to even small changes in context, which is one of the reasons that previous iterations of AI had little impact in the real world.


There are three main reasons why it’s different this time. The first is the emergence of cheap but powerful hardware (mainly central and graphics processing units), which has seen computational power growing by a factor of 10 every 4 years. The second is the exponential growth of data; massive data sets are an important reason that modern AI approaches have been so successful. The graph in the middle column shows data growth in zettabytes (10 to the power of 21 bytes); at this rate of growth we’ll run out of metric prefixes in a few years (yotta is the only prefix after zetta). The third is the emergence of vastly improved machine learning algorithms that are able to learn without being explicitly told what to learn. In the example here, the algorithm has coloured in the line drawings to create a fairly convincing photorealistic image, without ever being taught any of the relevant concepts, e.g. human, face, colour, drawing, photo.


We’re increasingly seeing evidence that in some very narrow domains of practice (e.g. reasoning and information recall), machine learning algorithms can outdiagnose experienced clinicians. It turns out that computers are really good at classifying patterns of variables that are present in very large datasets. And diagnosis is just a classification problem. For example, algorithms are very easily able to find sets of related signs and symptoms and put them into a box that we call “TB”. And increasingly, they are able to do this classification better than the best of us.
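
As a concrete (and entirely synthetic) illustration of diagnosis-as-classification, the sketch below trains a standard classifier on made-up symptom data. The features, the labelling rule and the “TB” label are invented for illustration and have no clinical validity.

```python
# A toy illustration of diagnosis as classification: a model learns to map
# patterns of signs and symptoms to a label. The data is synthetic and the
# features are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical binary features: [persistent cough, night sweats, weight loss, fever]
X = rng.integers(0, 2, size=(n, 4))

# A synthetic rule standing in for the real world: label "TB" when most features co-occur.
y = (X.sum(axis=1) >= 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on held-out cases:", model.score(X_test, y_test))
print("predicted probability of 'TB' for one new case:",
      model.predict_proba([[1, 1, 1, 0]])[0][1])
```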


It is estimated that up to 60% of a doctor’s time is spent capturing information in the medical record. Natural language processing algorithms are able to “listen” to the ambient conversation between a doctor and patient, record the audio and transcribe it (translating it in the process if necessary). The system then performs semantic analysis of the text (not just keyword analysis) to extract meaningful information, which it can use to populate an electronic health record. While the technology is at a very early stage and not yet safe for real-world application, it’s important to remember that this is the worst it’s ever going to be. Even if we reach some kind of technological dead end with respect to machine learning and from now on only increase efficiency, we are still looking at a transformational technology.
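
For illustration only, here is a toy skeleton of such a pipeline. Every stage is a placeholder: the hard-coded transcript stands in for speech recognition, and the keyword matching stands in for genuine semantic analysis.

```python
# A toy skeleton of an ambient documentation pipeline. Each stage is a
# placeholder, not a real implementation.
import re

def transcribe(audio):
    # In a real system this would be a speech-to-text model.
    return ("Patient reports a productive cough for three weeks, "
            "night sweats, and unintentional weight loss of four kilograms.")

def extract_findings(transcript):
    # A real system would use semantic analysis, not keyword matching.
    findings = {}
    if re.search(r"cough", transcript, re.I):
        findings["cough"] = True
    if re.search(r"night sweats", transcript, re.I):
        findings["night_sweats"] = True
    match = re.search(r"weight loss of (\w+) kilograms", transcript, re.I)
    if match:
        findings["weight_loss"] = match.group(1) + " kg"
    return findings

def populate_record(record, findings):
    # Write the extracted findings into a (toy) electronic health record.
    record.setdefault("history_of_presenting_illness", {}).update(findings)
    return record

record = populate_record({"patient_id": "example-001"},
                         extract_findings(transcribe(None)))
print(record)
```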


An algorithm recently passed the Chinese national medical exam, qualifying (in theory) as a physician. While we can argue that practising as a physician is more than writing a text-based exam, it’s hard not to acknowledge the fact that – at the very least – machines are becoming more capable in the domains of knowledge and reasoning that characterise much of clinical practice. Again, this is the worst that this technology is ever going to be.


This graph shows the number of AI applications under development in a variety of disciplines, including medicine (on the far right). The green segment shows the number of applications where AI is outperforming human beings. Orange segments show the number of applications that are performing relatively well, with blue highlighting areas that need work. There are two other points worth noting: medical AI is the area of research that is clearly showing the most significant advances (maybe because it’s the area where companies can make the most money); and all the way at the far left of the graph is education, suggesting that it may be some time before algorithms show the same progress in teaching.


Contrary to what we see in the mainstream media, AI is not a monolithic field of research; it consists of a wide variety of different technologies and philosophies that are each sometimes referred to under the more general heading of “AI”. While much of the current progress is driven by machine learning algorithms (which is itself driven by the three characteristics of modern society highlighted earlier), there are many areas of development, each of which can potentially contribute to different areas of clinical practice. For the purposes of this presentation, we can define AI as any process that is able to independently achieve an objective within a narrowly constrained domain of interest (although the constraints are becoming looser by the day).


Machine learning is a sub-domain of AI research that works by exposing an algorithm to a massive data set and asking it to look for patterns. By comparing what it finds to human-tagged patterns in the data, developers can fine-tune the algorithm (i.e. “teach” it) before exposing it to untagged data and seeing how well it performs relative to the training set. This generally describes the “learning” process of machine learning. Deep learning is a sub-domain of machine learning that works by passing data through many layers, allocating different weights to the data at each layer, thereby coming up with a statistical “answer” that expresses an outcome in terms of probability. Deep learning neural networks underlie many of the advances in modern AI research.
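
Here is a minimal sketch of that train-then-evaluate loop, using a tiny neural network that outputs a probability. The data is synthetic and the network is far smaller than anything used in practice.

```python
# A minimal sketch of the train-then-evaluate loop described above: fit a small
# neural network on labelled ("tagged") examples, then check how well it does
# on examples it has never seen. The data is synthetic and purely illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))              # 2000 cases, 10 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # a hidden pattern for the model to find

# "Tagged" training data versus held-out data the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Two hidden layers of weighted connections, producing a probability as output.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=1)
model.fit(X_train, y_train)

print("accuracy on unseen data:", model.score(X_test, y_test))
print("probability the first unseen case is positive:",
      model.predict_proba(X_test[:1])[0][1])
```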


Because machine and deep learning algorithms are trained on (biased) human-generated datasets, it’s easy to see how the algorithms themselves will have an inherent bias embedded in the outputs. The Twitter screenshot shows one of the least offensive tweets from Tay, an AI-enabled chatbot created by Microsoft, which learned from human interactions on Twitter. In the space of a few hours, Tay became a racist, sexist, homophobic monster – because this is what it learned from how we behave on Twitter. This is more of an indictment of human beings than it is of the algorithm. The other concern with neural networks is that, because of the complexity of the algorithms and the number of variables being processed, human beings are unable to comprehend how the output was computed. This has important implications when algorithms are helping with clinical decision-making and is the reason that resources are being allocated to the development of what is known as “explainable AI”.


As a result of the changes emerging from AI-based technologies in clinical practice, we will soon need to stop thinking of our roles in terms of “professions” and start thinking in terms of “tasks”. This matters because, increasingly, many of the tasks we associate with our professional roles will be automated. This is not all bad news though, because it seems probable that increased automation of the repetitive tasks in our repertoire will free us up to take on more meaningful work, for example, having more time to interact with patients. We need to start asking which tasks computers are better at and begin allocating those tasks to them. Of course, we will need to define what we mean by “better”: more efficient, more cost-effective, faster, and so on.


Another important change that will require the use of AI-based technologies in clinical practice will be the inability of clinicians to manage – let alone understand – the vast amount of information being generated by, and from, patients. Not only are all institutional tests and scans digital but increasingly, patients are creating their own data via wearables – and soon, ingestibles – all of which will require that clinicians are able to collect, filter, analyse and interpret these vast streams of information. There is evidence that, without the help of AI-based systems, clinicians simply will not have the cognitive capacity to understand their patients’ data.


The impact of more patient-generated health data is that we will see patients being in control of their data, which will exist on a variety of platforms (cloud storage, personal devices, etc.), none of which will be available to the clinician by default. This means that power will move to the patient as they make choices about who to allow access to their data in order to help them understand it. Clinicians will need to come to terms with the fact that they will no longer wield the power in the relationship and in fact, may need to work within newly constituted care teams that include data scientists, software engineers, UI designers and smart machines. And all of these interactions will be managed by the patient who will likely be making choices with inputs from algorithms.


The incentives for enthusiastic claims around developments in AI-based clinical systems are significant; this is an academic land grab the likes of which we have only rarely experienced. The scale of the funding involved puts pressure on researchers to exaggerate claims in order to be the first to every important milestone. This means that clinicians will need to become conversant with the research methods and philosophies of the data scientists who are publishing the most cutting-edge research in the medical field. The time will soon come when it will be difficult to understand the language of healthcare without first understanding the language of computer science.


The implications for health professions educators are profound, as we will need to start asking ourselves what we are preparing our graduates for. When clinical practice is enacted in an intelligent environment and clinicians are only one of many nodes in vast information networks, what knowledge and skills do they need to thrive? When machines outperform human beings in knowledge and reasoning tasks, what is the value of teaching students about disease progression, for example? We may find ourselves graduating clinicians who are well-trained, competent and irrelevant. It is not unreasonable to think that the profession called “doctor” will not exist in 25 years’ time, having been superseded by a collective of algorithms and third-party service providers offering more fine-grained services at a lower cost.


There are three new literacies that health professions educators will need to begin integrating into our undergraduate curricula: data literacy, so that healthcare graduates understand how to manage, filter, analyse and interpret massive sets of information in real time; technological literacy, as more and more of healthcare is enacted in digital spaces and mediated by digital devices and systems; and human literacy, so that we can become better at developing the skills necessary to interact more meaningfully with patients.


There is evidence to suggest that, while AI-based systems outperform human beings on many of the knowledge and reasoning tasks that make up clinical practice, the combination of AI and human originality results in the most improved outcomes of all. In other words, we may find that patient outcomes are best when we figure out how to combine human creativity and emotional response with machine-based computation.


And just when we’re thinking that “creativity” and “originality” are the sole province of human beings, we’re reminded that AI-based systems are making progress in those areas as well. It may be that the only way to remain relevant in a constantly changing world is to develop our ability to keep learning.

Physiotherapy in 2050: Ethical and clinical implications

This post describes a project that I began earlier this week with my 3rd year undergraduate students as part of their Professional Ethics module. The project represents a convergence of a few ideas that have been bouncing around in my head for a couple of years and are now coming together as a result of a proposal that I’m putting together for a book chapter for the Critical Physiotherapy Network. I’m undecided at this point whether I’ll develop it into a full research proposal, as I’m currently more inclined to just have fun with it rather than turn it into something that will feel more like work.

The project is premised on the idea that health and medicine – embedded within a broader social construct – will be significantly impacted by rapidly accelerating changes in technology. The question we are looking to explore in the project is: What are the moral, ethical, legal, and clinical implications for physiotherapy practice when the boundaries of medical and health science are significantly shifted as a result of technological advances?

The students will work in small groups that are allocated an area of medicine and health where we are seeing significant change as a result of the integration of advanced technology. Each week in class I will present an idea that is relevant to our Professional Ethics module (for example, the concept of human rights) and each group will then explore that concept within the framework of their topic. So, some might look at how gene therapy could influence how we think about our rights, while others might ask what it even means to be human. I’m not 100% sure how this is going to play out and will most likely adapt the project as we progress, taking into account student feedback and the challenges we encounter. I can foresee some groups having trouble with certain ethical constructs simply because they may not be applicable to their topic.

Exoskeletons are playing an increasingly important role in neurological rehabilitation.
The following list and questions aim to stimulate the discussion and to give some idea of what we are looking at (this list is not exhaustive and I’m still playing around with ideas – suggestions are welcome):

  1. Artificial intelligence and algorithmic ethical decision-making. Can computers be ethical? How is ethical reasoning incorporated into machines? How will ethical algorithms impact health, for example, when computers make decisions about organ transplant recipients? Can ethics be programmed into machines?
  2. Nanotechnology. As our ability to manipulate our world at the atomic level advances, what changes can we expect to see for physiotherapists and physiotherapy practice? How far can we go with integrating technology into our bodies before we stop being “human”?
  3. Gene therapy. What happens when genetic disorders that provide specialisation areas for physiotherapists are eradicated through gene therapy? What happens when we can “fix” the genetic problems that lead to complications that physiotherapists have traditionally had a significant role in managing? For example, what will we do when cystic fibrosis is cured? What happens when we have a vaccine for HIV? Or when ALS is little more than an inconvenience?
  4. Robotics. What happens when patients who undergo amputations are fitted with prosthetics that link to the nervous system? When exoskeletons for paralysed patients are common? How much of robotic systems will students need to know about? Will exoskeletons be the new wheelchairs?
  5. Aging. What happens when the aging population no longer ages? How will physiotherapy change as the human lifespan is extended? There is an entire field of physiotherapy devoted to the management of the aging population; what will happen to that? How will palliative care change?
  6. Augmented reality. When we can overlay digital information onto our visual field, what possibilities exist for effective patient management? For education? What happens when that information is integrated with location-based data, so that patient-specific information is presented to us when we are near that patient?
  7. Virtual reality. What will it mean for training when we can build entire hospitals and patient interactions in the virtual world? When we can introduce students to the ICU in their first year? This could be especially useful when we have challenges with finding enough placements for students who need to do clinical rotations.
  8. 3D printing. What happens when we can print any equipment that we need, that is made exactly to the patient’s specifications? How will this affect the cost of equipment distribution to patients? Can 3D printed crutches be recycled? Reused by other patients? What new kinds of equipment can be invented when we are not constrained by the production lines of the companies who traditionally make the tools we use?
  9. Brain-computer interfaces. When patients are able to control computers (and by extension, everything linked to the computer) simply by thinking about it, what does that mean for their roles in the world? What does it mean when someone with a C7 complete spinal cord injury can still be a productive member of society? What does it mean for community re-integration? How will “rehabilitation” change if computer science is a requirement to even understand the tools our patients use?
  10. Quantified self. As we begin to use sensors close to our bodies (inside our phones, watches, etc.) and soon – inside our bodies – we will have access to an unprecedented amount of personal (very personal) data about ourselves. We will be able to use that data to inform decision making about our health and well-being, which will change the patient-therapist relationship. This will most likely have the effect of modifying the power differential between patients and clinicians. How will we deal with that? Are we training students to know what to do with that patient information? To understand how these sensors work?
  11. Processing power. While this is actually something that is linked to every other item in the list, it might warrant its own topic purely because everything else depends on continuous improvements in processing power and the parallel reduction in cost.
  12. The internet. I’m not sure about this. While the architecture of the internet itself is unlikely to change much in the next few decades (disregarding the idea that the internet as we know it might be supplanted with something better), who has access to it and how we use it will most certainly change.

An artist's depiction of a nanobot that is smaller than blood cells.
I should state that we will be working under certain assumptions:

  • That the technology will not be uniformly integrated into society and health systems, i.e. that wealth disparity or income inequality will directly affect the implementation of certain therapies. This will obviously have ethical and moral implications.
  • That the technology will not be freely available, i.e. that corporations will license certain genetic therapies and withhold their use from those who cannot pay for the licence.
  • That technological progression will continue over time i.e. that regulations will not prevent, for example, further research into stem cell therapy.
  • …we may have to make additional assumptions as we move forward, but this is all I can think of for now.

We’ll probably find that there will be significant overlap in the above topics, since some are specific technologies that will have an influence on other areas. For example, gene therapy and nanotechnology may have an impact on aging; artificial intelligence will impact many areas, as will robotics and computing power. The idea isn’t that these topics are discrete and separate, but that they provide a focus point for discussion and exploration, with the understanding that overlap is inevitable. In fact, overlap is preferable, since it will help us explore relationships between the different areas and to find connections that we maybe were not previously aware of.

Giving patients bad news in virtual spaces where we can control the interaction.
The activities that the students engage in during this project are informed by the following ideas, which overlap with each other:

  • Authentic learning is a framework for designing learning tasks that lead to deeper engagement by students. Authentic tasks should be complex, collaborative, ill-defined, and completed over long periods.
  • Inquiry-based learning suggests that students should identify challenging questions that are aimed at addressing gaps in their understanding of complex problems. The research that they conduct is a process they go through in order to achieve outcomes, rather than being an end in itself.
  • Project-based learning is the idea that we can use full projects – based in the real world – to discuss and explore the disciplinary content, while simultaneously developing important skills that are necessary for learning in the 21st century.

I should be clear that I’m not really sure what the outcome of this project will be. I obviously have objectives for my students’ learning that relate to the Professional Ethics module but in terms of what we cover, how we cover it, what the final “product” is…these are all still quite fluid. I suppose that, ideally, I would like for us as a group (myself and the students) to explore the various concepts together and to come up with a set of suggestions that might help to guide physiotherapy education (or at least, physiotherapy education as practiced by me) over the next 5-10 years.

Augmented reality has significant potential for education.
So much of physiotherapy practice – and therefore, physiotherapy education – is premised on the idea that what has been important over the last 50 years will continue to be important for the next 50. However, as technology progresses and we see incredible advances in the integration of technology into medicine and health systems, we need to ask if the next 50 years are going to look anything like the last 50. In fact, it almost seems as if the most important skill we can teach our students is how to adapt to a constantly changing world. If this is true, then we may need to radically change what we prioritise in the curriculum, as well as how we teach students to learn. When every fact is instantly available, when algorithms influence clinical decision-making, when amputees are fitted with robotic prosthetics controlled directly via brain-computer interfaces…where does that leave the physiotherapist? This project is a first step (for me) towards at least beginning to think about these kinds of questions.