Categories
AI clinical

Comment: New robot does superior job sampling blood.

The results were comparable to or exceeded clinical standards, with an overall success rate of 87% for the 31 participants whose blood was drawn. For the 25 people whose veins were easy to access, the success rate was 97%. The device includes an ultrasound image-guided robot that draws blood from veins. A fully integrated device, which includes a module that handles samples and a centrifuge-based blood analyzer, could be used at bedsides and in ambulances, emergency rooms, clinics, doctors’ offices and hospitals.

Rutgers University. New robot does superior job sampling blood: First clinical trial of an automated blood drawing and testing device. ScienceDaily.

This is another example of the kinds of tasks that will increasingly be performed by machines. You can argue that certain patient populations (e.g. young children, patients with mental health issues, etc.) will always need a human being performing the technique for safety reasons. And this is likely to be true for a long time. But those situations account for only a minority of the venipunctures performed; the bulk of this work will soon be done by robots that are cheaper and faster than human clinical staff, and that cause less damage.

Nurses are unlikely to be replaced any time soon because their work includes so much more than drawing blood. But the tasks we expect them to perform are certainly going to change. How are health professions educators in the undergraduate curriculum working to get ahead of those changes?

Categories
AI clinical

Comment: Will robots make doctors obsolete? Nothing could be further from the truth.

The problem of overdiagnosis is often mentioned in relation to two common cancers: breast and prostate. In both cases, enhanced technology is already detecting small abnormalities that may never result in harm during a lifetime. Machine-learning may trump human interpretation but merely making a diagnosis does not bring us closer to the truth about the impact of the finding. In other words, will the cancer ever cause symptoms, and crucially, will the patient die from it? How will the knowledge of cancer alter the rest of a person’s days?

Srivastava, R. (2020). Will robots make doctors obsolete? Nothing could be further from the truth. The Guardian.

I’m not a fan of the way the author starts the article; it feels a bit contrived and unlikely to reflect the patient experience of healthcare around the world. But I think that the point the author is making is that there are certain aspects of healthcare that AI and robots aren’t going to replace (she could probably have just said that?).

So yes, AI is already “better” than human beings in several different areas (e.g. diagnostics, interpretation of findings, image recognition, etc.). But no, that doesn’t mean that healthcare professionals will be replaced. Because being a doctor/physio/nurse means that we are more than interpreters of results; we are human beings in communion with other human beings. While the features of AI in clinical practice don’t mean that we’re going to see the replacement of professions, they do mean that we might see the replacement of tasks within professions.

Unfortunately, the article doesn’t get to this point and simply concludes that, because all the tasks of a doctor can’t be replaced, the question is moot. But it’s the wrong question to ask. We’re not going to replace health care providers with smart humanoid robots but we’ll definitely see changes in professional training and in clinical practice.

The implications of this are that, in order to remain relevant, professions in the near future will need to demonstrate an ability to take advantage of the benefits of advanced technologies while adapting and expanding the relationship-centred aspects of health care.

Categories
AI clinical reading research

Resource: Towards a curated library for AI in healthcare

I’ve started working on what will eventually become a curated library of resources that I’m using for my research on the impact of artificial intelligence and machine learning on clinical practice. At the moment it’s just a public repository of the articles, podcasts, and blog posts that I’ve read or listened to and then saved in Zotero. You can subscribe to the feed so that when new items are added you’ll get a notification in whatever feed reader you use. Click on the image below to see the library.

The main library view in the web version of Zotero (note that the public view is different to what I’m showing here, since I have the beta version enabled; all of the functionality is the same though).

For now, it’s a public – but closed – group that has a library, meaning that anyone can see the list of library items but no-one can join the group, which means no-one else can add, edit or delete resources (for now). This is just because I’m still figuring out how it works and don’t want the additional admin of actually managing anything. I may open this up in future if it looks like anyone else is interested in joining and contributing. I’m also not sharing any of the original articles and books but will look into the implications of sharing these publicly, considering that most of them – being academic articles – are subject to copyright restrictions from the publishers.

The library/repository isn’t meant to be exhaustive but rather a small selection of articles and other resources that I think might be useful for clinicians, educators, students and researchers with an interest in AI in healthcare. At the moment it’s just a dump of some of the resources I’ve used, and it includes the notes and links associated with those resources. I’m going to revisit the items in the list and try to add more useful summaries and descriptions, with the idea that this could become something like a curated, annotated reading/watching/listening list for anyone with an interest in the topic.

Categories
AI research

#APaperADay – The Last Mile: Where Artificial Intelligence Meets Reality

“…implementation should be seen as an agile, iterative, and lightweight process of obtaining training data, developing algorithms, and crafting these into tools and workflows.”

Coiera, E. (2019). The Last Mile: Where Artificial Intelligence Meets Reality. Journal of Medical Internet Research, 21(11), e16323. https://doi.org/10.2196/16323

A short article (2 pages of text) describing the challenges of building AI systems without first understanding that technological solutions are only relevant when they solve real-world problems that we care about, and when they are built within the systems in which they will ultimately be used.

Note: I found it hard not to just rewrite the whole paper because I really like the way Coiera writes and find that his economy with words makes it hard to cut things out i.e. I think that it’s all important text. I tried to address this by making my notes without looking at the original article, and then going back over the notes and rewriting them.


Technology shapes us as we shape it. Humans and machines form a sociotechnical system.

The application of technology should be shaped by the problem at hand and not the technology itself. But we see the opposite of this today, with companies building technologies that are then used to solve “problems” that no-one thought were problems. Most social media fits this description.

Technological innovations may create new classes of solution but it’s only in the real world that we see which problems are worth addressing and which solutions are most appropriate. Just because a technology is presented as a solution doesn’t mean we should accept it; it’s up to us to decide whether it’s the best solution, and whether the problem is even important.

There are two broad research agendas for AI:

  1. The technical aspects of building machine intelligence.
  2. The application of machine intelligence to real world problems that we care about.

In our drive to accelerate progress in the first area, we may lose sight of the second. For example, even though image recognition is developing very quickly, the use of image recognition systems has had little clinical impact to date. In some cases, it may even make clinical outcomes worse; for example, when the overdiagnosis of a condition causes an increase in management (and associated costs and exposure to harm), even though treatment options remain unchanged.

There are three stages of development with data-driven technologies like AI-based systems:

  1. Data are acquired, labelled and cleaned.
  2. The system is built and its technical performance tested in controlled environments.
  3. Algorithms are applied in real-world contexts.

It’s only really in the last stage where it becomes clear that “AI does nothing on its own” i.e. all technology is embedded in the sociotechnical systems mentioned earlier and is intricately connected to people and the choices that people make. This makes sociotechnical systems messy and complex, and therefore immune to the “solutions” touted by technology companies.

Some of the “last mile” challenges of AI implementation include:

  1. Measurement: We use standard metrics of AI performance to show improvement. But these metrics are often only useful in controlled experiments and are divorced from the practical realities of implementation in the clinical context.
  2. Generalisation and calibration: AI systems are trained on historical data, so the future performance of the algorithm depends on how well the historical data match the new context (see the sketch after this list).
  3. Local context: The complexity of interacting variables within local contexts means that any system will have to be fine-tuned to the organisation in which it is embedded. Organisations also change over time, meaning that the AI will need to be adjusted as well.
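
As a side note (this is my illustration, not from the paper): the generalisation problem is easy to demonstrate. In this minimal sketch, with entirely synthetic data, a model trained on "historical" data performs well on data from the same distribution and degrades when the context shifts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# "Historical" training data: the outcome depends on two features.
X_train = rng.normal(0.0, 1.0, size=(1000, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Test data from the same distribution: performance looks good.
X_same = rng.normal(0.0, 1.0, size=(500, 2))
y_same = (X_same[:, 0] + X_same[:, 1] > 0).astype(int)
print("same context:   ", accuracy_score(y_same, model.predict(X_same)))

# A "new context": inputs have shifted and the feature-outcome
# relationship is slightly different, so performance degrades.
X_new = rng.normal(1.5, 2.0, size=(500, 2))
y_new = (0.2 * X_new[:, 0] + X_new[:, 1] > 1.0).astype(int)
print("shifted context:", accuracy_score(y_new, model.predict(X_new)))
```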

The author also provides possible solutions to these challenges.

Software development has moved from a linear process to an iterative model where systems are developed in situ through interaction with users in the real world. Google, Facebook, Amazon, etc. do this all the time by exposing small subsets of users to changes in the platform, and then measuring differences in engagement using metrics that the platforms care about (time spent on Facebook, or number of clicks on ads).
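
For what it's worth, the basic pattern is simple to sketch. In this hypothetical example (all names and numbers are invented), a small fraction of users is deterministically bucketed into a new variant and an engagement metric is compared between groups:

```python
import hashlib
import random
import statistics

def assign_variant(user_id: str, treatment_fraction: float = 0.05) -> str:
    """Deterministically bucket a small subset of users into the new variant."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < treatment_fraction * 100 else "control"

# Simulate an engagement metric (e.g. minutes on site) for illustration.
random.seed(0)
metrics = {"control": [], "treatment": []}
for i in range(10_000):
    variant = assign_variant(f"user-{i}")
    engagement = random.gauss(30, 8) + (2.0 if variant == "treatment" else 0.0)
    metrics[variant].append(engagement)

for variant, values in metrics.items():
    print(variant, len(values), round(statistics.mean(values), 2))
```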

In healthcare we’ll need to build systems in which AI-based technologies are implemented, not as completed solutions, but with the understanding that they will need refinement and adaptation through iterative use in complex, local contexts. Ideally, they will be built within the systems they are going to be used in.


Note: I’m the Editor at OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.

Categories
AI clinical

Podcast: What AI means for the physical exam

It’s a very important ritual. If you look at rituals, in general, they are all about crossing a threshold. We marry, we have baptisms, we have funerals—all with ceremony to indicate the crossing of a threshold. If we step back and look at the physical exam, it has all the trappings of ritual.

Verghese, A. (2019). Eric Topol and Abraham Verghese on What AI Means for the Physical Exam. Medicine and the Machine podcast.

A few thoughts after listening to an episode of the Medicine and the Machine podcast.

Almost immediately we get to the notion that there’s very little value, in terms of data collection, in the physical exam. It’s clear that the validity and reliability of a lot of what we do during the “laying on of hands” is questionable. So far so good. But then the hosts start talking about the value of physical touch as part of a ritual that includes some kind of threshold crossing for the clinician and patient. This is where it starts getting a bit weird.

On the one hand, I agree that there’s a lot of ritual that frames the patient-clinician interaction and that this may even be something that patients look for. On the other hand, I don’t think this is something to be celebrated, and I believe it will fall away as AI becomes more tightly integrated into healthcare. You don’t need to conduct a physical exam to signal to the patient that you’re paying attention; you can just pay attention.

Note to self: I think that there’s some potentially fruitful discussion around the links between religion and medicine that might be worth exploring at some point.

I’m also uncomfortable with some of the language used in the episode that’s reminiscent of priests, ceremony, and the mystical; I don’t know why, but it makes me think of a profession in decline. There’s a parallel here with religion, which is under pressure worldwide as the spaces in which God has room to move get smaller and smaller. Not that medicine is going to go away entirely, but the parts of it that try to hold onto the remnants of a past that is no longer relevant are going to become increasingly disconnected from 21st century clinical practice.

If you think that the value of the human being in the patient-clinician encounter is that we need people to enact a ritual, then surely you’ve lost the plot. There are many reasons why this perspective is problematic but two big ones come to mind:

  1. Rituals are used to create a sense of mystery as part of a ceremony related to threshold crossing. While I think that this has value in some parts of society (e.g. becoming an adult, getting married, etc.) I don’t think it has a place in scientific endeavour.
  2. You don’t need to spend 7 years studying medicine, and then another 5 years specialising, in order to simulate some kind of threshold crossing with a patient.

Having said all that, I think the episode is still worth listening to, even if only to hear Topol and Verghese come up with dubious arguments for why it’s so important for the doctor to remain central to the clinical encounter.

Categories
AI clinical research

Survey: Physiotherapy clinicians’ perceptions of artificial intelligence in clinical practice

We know very little about how physiotherapy clinicians think about the impact of AI-based systems on clinical practice, or how these systems will influence human relationships and professional practice. As a result, we cannot prepare for the changes that are coming to clinical practice and physiotherapy education. The aim of this study is to explore how physiotherapists currently think about the potential impact of artificial intelligence on their own clinical practice.

Earlier this year I registered a project that aims to develop a better understanding of how physiotherapists think about the impact of artificial intelligence in clinical practice. Now I’m ready to move forward with the first phase of the study, which is an online survey of physiotherapy clinicians’ perceptions of AI in professional practice. The second phase will be a series of follow-up interviews with survey participants who’d like to discuss the topic in more depth.

I’d like to get as many participants as possible (obviously), so I would really appreciate it if you could share the link to the survey with anyone you think might be interested. There are 12 open-ended questions split into 3 sections, with a fourth section for demographic information. Participants don’t need a detailed understanding of artificial intelligence and (I think) I’ve provided enough context to make the questionnaire simple for anyone to complete in about 20 minutes.

Here is a link to the questionnaire: https://forms.gle/HWwX4v7vXyFgMSVLA.

This project has received ethics clearance from the University of the Western Cape (project number: BM/19/3/3).

Categories
AI clinical

Research project exploring clinicians’ perspectives of the introduction of ML into clinical practice

I recently received ethics clearance to begin an explorative study looking at how physiotherapists think about the introduction of machine learning into clinical practice. The study will use an international survey and a series of interviews to gather data on clinicians’ perspectives on questions like the following:

  • What aspects of clinical practice are vulnerable to automation?
  • How do we think about trust when it comes to AI-based clinical decision support?
  • What is the role of the clinician in guiding the development of AI in clinical practice?

I’m busy finalising the questionnaire and hope to have the survey up and running in a couple of weeks, with more focused interviews following. If these kinds of questions interest you and you’d like to have a say in answering them, keep an eye out for the call to participate.

Here is the study abstract (contact me if you’d like more detailed information):

Background: Artificial intelligence (AI) is a branch of computer science that aims to embed intelligent behaviour into software in order to achieve certain objectives. Increasingly, AI is being integrated into a variety of healthcare and clinical applications and there is significant research and funding being directed at improving the performance of these systems in clinical practice. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyse and interpret the complex interactions of data, patients and clinical decision-making.

Aim: In order to ensure that we successfully integrate machine intelligence with the essential human characteristics of empathic, caring and creative clinical practice, we need to first understand how clinicians perceive the introduction of AI into professional practice.

Methods: This study will make use of an explorative design to gather qualitative data via an online survey and a series of interviews with physiotherapy clinicians from around the world. The survey questionnaire will be self-administered and piloted for validity and ambiguity, and the interview guide will be informed by the study aim. The population for both survey and interviews will consist of physiotherapy clinicians from around the world. This is an explorative study with a convenience sample, therefore no a priori sample size will be calculated.

Categories
AI ethics

First compute no harm

Is it acceptable for algorithms today, or an AGI in a decade’s time, to suggest withdrawal of aggressive care and so hasten death? Or alternatively, should it recommend persistence with futile care? The notion of “doing no harm” is stretched further when an AI must choose between patient and societal benefit. We thus need to develop broad principles to govern the design, creation, and use of AI in healthcare. These principles should encompass the three domains of technology, its users, and the way in which both interact in the (socio-technical) health system.

Source: Coiera, E. et al. (2017). First compute no harm. BMJ Opinion.

The article goes on to list some of the guiding principles for the development of AI in healthcare, including the following:

  • AI must be designed and built to meet safety standards that ensure it is fit for purpose and operates as intended.
  • AI must be designed for the needs of those who will work with it, and fit their workflows.
  • Humans must have the right to challenge an AI’s decision if they believe it to be in error.
  • Humans should not direct AIs to perform beyond the bounds of their design or delegated authority.
  • Humans should recognize that their own performance is altered when working with AI.
  • If humans are responsible for an outcome, they should be obliged to remain vigilant, even after they have delegated tasks to an AI.

The principles listed above are only a very short summary. If you’re interested in the topic of ethical decision making in clinical practice, you should read the whole thing.

Categories
AI clinical

The fate of medicine in the time of AI

Source: Coiera, E. (2018). The fate of medicine in the time of AI.

The challenges of real-world implementation alone mean that we probably will see little change to clinical practice from AI in the next 5 years. We should certainly see changes in 10 years, and there is a real prospect of massive change in 20 years. [1]

This means that students entering health professions education today are likely to begin seeing the impact of AI in clinical practice when they graduate, and are very likely to see significant changes 3-5 years into their practice after graduating. Regardless of what progress is made between now and then, the students we’re teaching today will certainly be practising in a clinical environment that is very different from the one we prepared them for.

Coiera offers the following suggestions for how clinical education should probably be adapted:

  • Include a solid foundation in the statistical and psychological science of clinical reasoning.
  • Develop models of shared decision-making that include patients’ intelligent agents as partners in the process.
  • Prepare clinicians for a greater role in patient safety as new risks emerge, e.g. automation bias.
  • Ensure that clinicians are active participants in the development of new models of care that will become possible with AI.

We should also recognise that there is still a lot that is unknown with respect to where, when and how these disruptions will occur. Coiera suggests that our best guesses about the changes that are likely to happen should focus on those aspects of practice that are routine, because this is where AI research will concentrate. As educators, we should work with clinicians to identify the areas of clinical practice that are most likely to be disrupted by AI-based technologies, and then determine how education needs to change in response.

The prospect of AI is a Rorschach blot upon which many transfer their technological dreams or anxieties.

Finally, it’s also useful to consider that we will see in AI our own hopes and fears and that these biases are likely to inform the way we think about the potential benefits and dangers of AI. For this reason, we should include as diverse a group as possible in the discussion of how this technology should be integrated into practice.


[1] The quote from the article is based on Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Categories
AI clinical education

An introduction to artificial intelligence in clinical practice and education

Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each slide. This presentation is based on a paper I submitted to OpenPhysio, called “Artificial intelligence in clinical practice: Implications for physiotherapy education”.


The graph shows how traffic to a variety of news websites changed after Facebook made a change to their Newsfeed algorithm, highlighting the influence that algorithms have on the information presented to us, and how we no longer make real choices about what to read. When algorithms are responsible for filtering what we see, they have power over what we learn about the world.


The graph shows the near-flat line of social development and population growth until the invention of the steam engine. Before that, all of the Big Ideas we came up with had relatively little impact on our physical well-being. If your grandfather spent his life pushing a plough there was an excellent chance that you’d spend your life pushing one too. But once we figured out how to augment our physical abilities with machines, we saw significant advances in society and industry and an associated improvement in everyone’s quality of life.


The emergence of artificial intelligence in the form of narrowly constrained machine learning algorithms has demonstrated the potential for important advances in cognitive augmentation. Basically, we are starting to really figure out how to use computers to enhance our intelligence. However, we must remember that we’ve been augmenting our cognitive ability for a long time, from exporting our memories onto external devices, to performing advanced computation beyond the capacity of our brains.


The enthusiasm with which modern AI is being embraced is not new. The research and engineering aspects of artificial intelligence have been around since the 1950s, while fictional AI has an even longer history. The field has been through a series of highs and lows (the lows are called AI Winters). The developments during these cycles were fuelled by what has become known as Good Old-Fashioned AI: early attempts to explicitly design decision-making into algorithms by hard-coding all possible variations of the interactions in a closed environment. Understandably, these systems were brittle and unable to adapt to even small changes in context. This is one of the reasons that previous iterations of AI had little impact in the real world.


There are 3 main reasons why it’s different this time. The first is the emergence of cheap but powerful hardware (mainly central and graphics processing units), which has seen computational power growing by a factor of 10 every 4 years. The second is the exponential growth of data; massive data sets are an important reason that modern AI approaches have been so successful. The graph in the middle column shows data growth in zettabytes (10 to the power of 21). At this rate of data growth we’ll run out of metric prefixes in a few years (yotta is the only allocation after zetta). The third is the emergence of vastly improved machine learning algorithms that are able to learn without being explicitly told what to learn. In the example here, the algorithm has coloured in the line drawings to create a pretty good photorealistic image, but without being taught any of the underlying concepts i.e. human, face, colour, drawing, photo, etc.
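
As a rough back-of-the-envelope check on that claim, here's a sketch of the compounding involved; the starting figure and annual growth rate are assumptions for illustration, not data from the slide:

```python
# Assumptions for illustration only: ~30 ZB of global data around 2018,
# growing at roughly 60% per year (about 10x per ~5 years).
data_zb = 30.0
year = 2018
annual_growth = 1.6

# 1000 ZB = 1 yottabyte (YB), the last SI prefix allocated at the time.
while data_zb < 1000:
    data_zb *= annual_growth
    year += 1

print(f"~{data_zb:,.0f} ZB (past 1 YB) by around {year}")
```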


We’re increasingly seeing evidence that in some very narrow domains of practice (e.g. reasoning and information recall), machine learning algorithms can outdiagnose experienced clinicians. It turns out that computers are really good at classifying patterns of variables that are present in very large datasets. And diagnosis is just a classification problem. For example, algorithms are very easily able to find sets of related signs and symptoms and put them into a box that we call “TB”. And increasingly, they are able to do this classification better than the best of us.
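
To make the "diagnosis as classification" point concrete, here's a toy sketch on entirely synthetic data; the features, the "TB" labelling rule and the model choice are all invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical binary features: [cough, night_sweats, weight_loss, fever]
X = rng.integers(0, 2, size=(500, 4))
# Synthetic rule: label as "TB" when most of the classic signs co-occur.
y = (X.sum(axis=1) >= 3).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new "patient": cough, night sweats and weight loss, but no fever.
new_patient = [[1, 1, 1, 0]]
print("P(TB):", clf.predict_proba(new_patient)[0][1])
```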


It is estimated that up to 60% of a doctor’s time is spent capturing information in the medical record. Natural language processing algorithms are able to “listen” to the ambient conversation between a doctor and patient, record the audio and transcribe it (translating it in the process if necessary). The system then performs semantic analysis of the text (not just keyword analysis) to extract meaningful information, which it can use to populate an electronic health record. While the technology is in a very early phase and not yet safe for real-world application, it’s important to remember that this is the worst it’s ever going to be. Even if we reach some kind of technological dead end with respect to machine learning and from now on we only increase efficiency, we are still looking at a transformational technology.
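
As a very rough illustration of the extraction step (not the actual technology described above), here's a sketch using a general-purpose NLP pipeline on an already-transcribed, hypothetical snippet of a consultation; a real ambient-documentation system would add speech-to-text and a clinically trained model:

```python
import spacy  # assumes: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

transcript = (
    "The patient reports chest pain for three days, worse on exertion. "
    "She takes aspirin daily and was seen at Groote Schuur Hospital in May."
)

doc = nlp(transcript)

# Named entities give a crude structured view of the free-text conversation;
# a clinical system would use a model trained on medical text instead.
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
```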


An algorithm recently passed the Chinese national medical exam, qualifying (in theory) as a physician. While we can argue that practising as a physician is more than writing a text-based exam, it’s hard not to acknowledge the fact that – at the very least – machines are becoming more capable in the domains of knowledge and reasoning that characterise much of clinical practice. Again, this is the worst that this technology is ever going to be.


This graph shows the number of AI applications under development in a variety of disciplines, including medicine (on the far right). The green segment shows the number of applications where AI is outperforming human beings. Orange segments show the number of applications that are performing relatively well, with blue highlighting areas that need work. There are two other points worth noting: medical AI is the area of research showing the most significant advances (maybe because it’s the area where companies can make the most money), and all the way at the far left of the graph is education, suggesting that it may be some time before algorithms show the same progress in teaching.


Contrary to what we see in the mainstream media, AI is not a monolithic field of research; it consists of a wide variety of different technologies and philosophies that are sometimes each referred to under the more general heading of “AI”. While much of the current progress is driven by machine learning algorithms (which are themselves driven by the 3 characteristics of modern society highlighted earlier), there are many areas of development, each of which can potentially contribute to different areas of clinical practice. For the purposes of this presentation, we can define AI as any process that is able to independently achieve an objective within a narrowly constrained domain of interest (although the constraints are becoming looser by the day).


Machine learning is a sub-domain of AI research that works by exposing an algorithm to a massive data set and asking it to look for patterns. By comparing what it finds to human-tagged patterns in the data, developers can fine-tune the algorithm (i.e. “teach” it) before exposing it to untagged data and seeing how well it performs relative to the training set. This broadly describes the “learning” process of machine learning. Deep learning is a sub-domain of machine learning that works by passing data through many layers, allocating different weights to the data at each layer, thereby coming up with a statistical “answer” that expresses an outcome in terms of probability. Deep learning neural networks underlie many of the advances in modern AI research.
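
Here's a compact sketch of that train-and-then-test loop, using a small neural network on synthetic data; everything here (data, architecture, numbers) is illustrative rather than a description of any particular system:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))               # inputs
y = (X[:, :3].sum(axis=1) > 0).astype(int)    # "human-tagged" labels

# Hold out data the model never sees during training ("untagged" at fit
# time) to estimate how well the learned patterns generalise.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1
)

# A small multi-layer network: each layer re-weights the data it receives.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
model.fit(X_train, y_train)

# The output is probabilistic: an "answer" expressed as a probability.
print("held-out accuracy:", model.score(X_test, y_test))
print("P(class=1), first test case:", model.predict_proba(X_test[:1])[0][1])
```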


Because machine and deep learning algorithms are trained on (biased) human-generated datasets, it’s easy to see how the algorithms themselves will have an inherent bias embedded in the outputs. The Twitter screenshot shows one of the least offensive tweets from Tay, an AI-enabled chatbot created by Microsoft, which learned from human interactions on Twitter. In the space of a few hours, Tay became a racist, sexist, homophobic monster – because this is what it learned from how we behave on Twitter. This is more of an indictment of human beings than it is of the algorithm. The other concern with neural networks is that, because of the complexity of the algorithms and the number of variables being processed, human beings are unable to comprehend how the output was computed. This has important implications when algorithms are helping with clinical decision-making and is the reason that resources are being allocated to the development of what is known as “explainable AI”.
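
As an aside, one simple technique from the "explainable AI" toolbox is easy to sketch: permutation importance asks how much a model's score drops when each input feature is shuffled. The data and model here are synthetic, and this is only a rough indication of the idea, not the state of the art:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = (2 * X[:, 0] + X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

model = GradientBoostingClassifier(random_state=2).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# a rough indication of which inputs drive the model's output.
result = permutation_importance(model, X, y, n_repeats=10, random_state=2)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```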


As a result of the changes emerging from AI-based technologies in clinical practice, we will soon need to stop thinking of our roles in terms of “professions” and start thinking of them in terms of “tasks”. This matters because, increasingly, many of the tasks we associate with our professional roles will be automated. This is not all bad news though, because it seems probable that increased automation of the repetitive tasks in our repertoire will free us up to take on more meaningful tasks, for example, having more time to interact with patients. We need to start asking which tasks computers are better at, and start allocating those tasks to them. Of course, we will need to define what we mean by “better”: more efficient, more cost-effective, faster, etc.


Another important change that will require the use of AI-based technologies in clinical practice will be the inability of clinicians to manage – let alone understand – the vast amount of information being generated by, and from, patients. Not only are all institutional tests and scans digital but increasingly, patients are creating their own data via wearables – and soon, ingestibles – all of which will require that clinicians are able to collect, filter, analyse and interpret these vast streams of information. There is evidence that, without the help of AI-based systems, clinicians simply will not have the cognitive capacity to understand their patients’ data.


The impact of more patient-generated health data is that we will see patients being in control of their data, which will exist on a variety of platforms (cloud storage, personal devices, etc.), none of which will be available to the clinician by default. This means that power will move to the patient as they make choices about who to allow access to their data in order to help them understand it. Clinicians will need to come to terms with the fact that they will no longer wield the power in the relationship and in fact, may need to work within newly constituted care teams that include data scientists, software engineers, UI designers and smart machines. And all of these interactions will be managed by the patient who will likely be making choices with inputs from algorithms.


The incentives for enthusiastic claims around developments in AI-based clinical systems are significant; this is an academic land grab the likes of which we have only rarely experienced. The scale of the funding involved puts pressure on researchers to exaggerate claims in order to be the first to every important milestone. This means that clinicians will need to become conversant with the research methods and philosophies of the data scientists who are publishing the most cutting-edge research in the medical field. The time will soon come when it will be difficult to understand the language of healthcare without first understanding the language of computer science.


The implications for health professions educators are profound, as we will need to start asking ourselves what we are preparing our graduates for. When clinical practice is enacted in an intelligent environment and clinicians are only one of many nodes in vast information networks, what knowledge and skills do they need to thrive? When machines outperform human beings in knowledge and reasoning tasks, what is the value of teaching students about disease progression, for example? We may find ourselves graduating clinicians who are well-trained, competent and irrelevant. It is not unreasonable to think that the profession called “doctor” will not exist in 25 years’ time, having been superseded by a collective of algorithms and 3rd-party service providers who provide more fine-grained services at a lower cost.


There are three new literacies that health professions educators will need to begin integrating into our undergraduate curricula. Data literacy, so that healthcare graduates will understand how to manage, filter, analyse and interpret massive sets of information in real-time; Technological literacy, as more and more of healthcare is enacted in digital spaces and mediated by digital devices and systems; and Human literacy, so that we can become better at developing the skillsets necessary to interact more meaningfully with patients.


There is evidence to suggest that, while AI-based systems outperform human beings on many of the knowledge and reasoning tasks that make up clinical practice, the combination of AI and human originality results in the most improved outcomes of all. In other words, we may find that patient outcomes are best when we figure out how to combine human creativity and emotional response with machine-based computation.


And just when we’re thinking that “creativity” and “originality” are the sole province of human beings, we’re reminded that AI-based systems are making progress in those areas as well. It may be that the only way to remain relevant in a constantly changing world is to develop our ability to keep learning.