UCT seminar: Shaping our algorithms

Tomorrow I’ll be presenting a short seminar at the University of Cape Town on a book chapter that was published earlier this year, called Shaping our algorithms before they shape us. Here are the slides I’ll be using, which I think are a useful summary of the chapter itself.


Survey: Physiotherapy clinicians’ perceptions of artificial intelligence in clinical practice

We know very little about how physiotherapy clinicians think about the impact of AI-based systems on clinical practice, or how these systems will influence human relationships and professional practice. As a result, we cannot prepare for the changes that are coming to clinical practice and physiotherapy education. The aim of this study is to explore how physiotherapists currently think about the potential impact of artificial intelligence on their own clinical practice.

Earlier this year I registered a project that aims to develop a better understanding of how physiotherapists think about the impact of artificial intelligence in clinical practice. Now I’m ready to move forward with the first phase of the study, which is an online survey of physiotherapy clinicians’ perceptions of AI in professional practice. The second phase will be a series of follow-up interviews with survey participants who’d like to discuss the topic in more depth.

I’d like to get as many participants as possible (obviously), so I’d really appreciate it if you could share the link to the survey with anyone you think might be interested. There are 12 open-ended questions split into 3 sections, with a fourth section for demographic information. Participants don’t need a detailed understanding of artificial intelligence, and (I think) I’ve provided enough context to make the questionnaire simple for anyone to complete in about 20 minutes.

Here is a link to the questionnaire: https://forms.gle/HWwX4v7vXyFgMSVLA.

This project has received ethics clearance from the University of the Western Cape (project number: BM/19/3/3).

Comment: How do we learn to work with intelligent machines?

I discussed something related to this earlier this year (the algorithmic de-skilling of clinicians) and thought that this short presentation added something extra. It’s not just that AI and machine learning have the potential to create scenarios in which qualified clinical experts become de-skilled over time; they will also impact on our ability to teach and learn those skills in the first place.

We’re used to the idea of a novice working closely with a more experienced clinician, and learning from them through observation and questioning (how closely this maps onto reality is a different story). When the tasks usually performed by more experienced clinicians are outsourced to algorithms, who does the novice learn from?

Will clinical supervision consist of talking undergraduate students through the algorithmic decision-making process? Discussing how probabilistic outputs were determined from limited datasets? How to interpret confidence levels of clinical decision-support systems? When clinical decisions are made by AI-based systems in the real-world of clinical practice, what will we lose in the undergraduate clinical programme, and how do we plan on addressing it?

Book chapter published: Shaping our algorithms before they shape us

I’ve just had a chapter published in an edited collection entitled: Artificial Intelligence and Inclusive Education: Speculative Futures and Emerging Practices. The book is edited by Jeremy Knox, Yuchen Wang and Michael Gallagher and is available here.

Here’s the citation: Rowe M. (2019) Shaping Our Algorithms Before They Shape Us. In: Knox J., Wang Y., Gallagher M. (eds) Artificial Intelligence and Inclusive Education. Perspectives on Rethinking and Reforming Education. Springer, Singapore. https://doi.org/10.1007/978-981-13-8161-4_9.

And here’s my abstract:

A common refrain among teachers is that they cannot be replaced by intelligent machines because of the essential human element that lies at the centre of teaching and learning. While it is true that there are some aspects of the teacher-student relationship that may ultimately present insurmountable obstacles to the complete automation of teaching, there are important gaps in practice where artificial intelligence (AI) will inevitably find room to move. Machine learning is the branch of AI research that uses algorithms to find statistical correlations between variables that may or may not be known to the researchers. The implications of this are profound and are leading to significant progress being made in natural language processing, computer vision, navigation and planning. But machine learning is not all-powerful, and there are important technical limitations that will constrain the extent of its use and promotion in education, provided that teachers are aware of these limitations and are included in the process of shepherding the technology into practice. This has always been important but when a technology has the potential of AI we would do well to ensure that teachers are intentionally included in the design, development, implementation and evaluation of AI-based systems in education.

Comment: Could robots make us better humans?

This is one of his arguments for listening to AI-generated music, studying how computers do maths and…gazing at digitally produced paintings: to understand how advanced machines work at the deepest level, in order to make sure we know everything about the technology that is now built into our lives.

Harris, J. (2019). Could robots make us better humans? The Guardian.

Putting aside the heading that conflates “robots” with “AI”, there are several insightful points worth noting in this Guardian interview with the Oxford-based mathematician and musician, Marcus du Sautoy. I think it’ll be easiest if I just work through the article and highlight them in the order in which they appear.

1. “My PhD students seem to have to spend three years just getting to the point where they understand what’s being asked of them…”: It’s getting increasingly difficult to make advances in a variety of research domains. The low-hanging fruit has been picked and it subsequently takes longer and longer to get to the forefront of knowledge in any particular area. At some point, making progress in any scientific endeavour is going to require so much expertise that no single human being will be able to contribute much to the overall problem.

2. “I have found myself wondering, with the onslaught of new developments in AI, if the job of mathematician will still be available to humans in decades to come. Mathematics is a subject of numbers and logic. Isn’t that what computers do best?”: On top of this, we also need to contend with the idea that advances in AI seem to indicate that some of these systems are able to develop innovations in what we might consider to be deeply human pursuits. Whether we call this creativity or something else, it’s clear that AI-based systems are reaching insights that we may eventually have arrived at ourselves, albeit at some distant point in the future.

3. “I think human laziness is a really important part of finding good, new ways to do things…”: Even in domains of knowledge that seem to be dominated by computation, there is hope in the idea that, working together, we may be able to develop new solutions to complex problems. Human beings often look for shortcuts when faced with inefficiency or boredom, something that AI-based systems are unlikely to do because they can simply brute-force their way through the problem. Perhaps the human desire to take the path of least resistance, combined with the massive computational resources that an AI could bring to bear, would result in solutions that are beyond the capacity of either working in isolation.

4. “Whenever I talk about maths and music, people get very angry because they think I’m trying to take the emotion out of it…”: Du Sautoy suggests that what we’re responding to in creative works of art isn’t an innately emotional thing. Rather, there’s a mathematical structure that we recognise first, and the emotion comes later. If that’s true, then there really is nothing to stop AI-based systems from not only creating beautiful art (they already do that) but also creating art that moves us.

5. “We often behave too like machines. We get stuck. I’m probably stuck in my ways of thinking about mathematical problems”: If it’s true that AI-based systems may open us up to new ways of thinking about problems, we may find that working in collaboration with them makes us – perhaps counterintuitively – more human. If we keep asking what it is that makes us human, and let machines take on the tasks that don’t fit into that model, it may provide space for us to expand and develop those things that we believe make us unique. Rather than competing on computation and reason, what if we left those things to machines, and looked instead to find other ways of valuing human capacity?

Comment: Lessons learned building natural language processing systems in health care

Many people make the mistake of assuming that clinical notes are written in English. That happens because that’s how doctors will answer if you ask them what language they use.

Talby, D. (2019). Lessons learned building natural language processing systems in health care. O’Reilly.

This is an interesting post making the point that medical language – especially when written in clinical notes – is not the same as other, more typical, human languages. This is important to recognise in the context of training natural language processing (NLP) models in the healthcare context because medical languages have different vocabularies, grammatical structure, and semantics. Trying to get an NLP system to “understand”* medical language is a fundamentally different problem to understanding other languages.

The lessons from this article are slightly technical (although not difficult to follow) and do a good job highlighting why NLP in health systems is seeing slower progress than the NLP running on your phone. You may think that, since Google Translate does quite well translating between English and Spanish, for example, it should also be able to translate between English and “Radiography”. This article explains why that problem is not only harder than “normal” translation, but also different.

* Note: I’m saying “understand” while recognising that current NLP systems understand nothing. They’re statistically modelling the likelihood that certain words follow certain other words and have no concept of what those words mean.
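To make that footnote a bit more concrete, here’s a toy sketch (in Python) of the kind of statistical modelling I’m referring to: a tiny bigram model built over a few made-up, clinical-style phrases. The corpus, function name and abbreviations are purely illustrative and aren’t from the article.

from collections import Counter, defaultdict

# A toy corpus standing in for clinical notes; purely illustrative.
corpus = [
    "pt reports sob on exertion",
    "pt reports chest pain on exertion",
    "pt denies chest pain",
]

# Count how often each word follows each other word (a bigram model).
bigram_counts = defaultdict(Counter)
for note in corpus:
    tokens = note.split()
    for current_word, next_word in zip(tokens, tokens[1:]):
        bigram_counts[current_word][next_word] += 1

def next_word_probability(current_word, next_word):
    # Estimate P(next_word | current_word) from the counts above.
    following = bigram_counts[current_word]
    total = sum(following.values())
    return following[next_word] / total if total else 0.0

# The model "knows" that "pain" tends to follow "chest", but it has no
# concept of what either word means.
print(next_word_probability("chest", "pain"))   # 1.0 in this toy corpus
print(next_word_probability("reports", "sob"))  # 0.5

Real systems are vastly more sophisticated than this, but the underlying point stands: there is no meaning anywhere in the process, just counts and probabilities.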

Comment: Training a single AI model can emit as much carbon as five cars in their lifetimes

The results underscore another growing problem in AI, too: the sheer intensity of resources now required to produce paper-worthy results has made it increasingly challenging for people working in academia to continue contributing to research. “This trend toward training huge models on tons of data is not feasible for academics…because we don’t have the computational resources. So there’s an issue of equitable access between researchers in academia versus researchers in industry.”

Hao, K. (2019). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review.

The article focuses on the scale of the financial and environmental cost of training natural language processing (NLP) models, comparing the carbon emissions of various AI models to those of a car over its lifetime. To be honest, this isn’t something I’ve given much thought to, but seeing it presented visually really drives the point home.

As much as this is a cause for concern, I’m less worried about it in the long term, for the following reason. As the authors state in the article, the code and models for AI and NLP are currently really inefficient; they don’t need to be, because compute is relatively easy to come by (if you’re Google or Facebook). I think the models will get more efficient, as evidenced by the fact that new computer vision algorithms can reach the same outcomes with datasets that are orders of magnitude smaller than was previously possible.

For me though, the quote that I’ve pulled from the article to start this post is more compelling. If the costs of modelling NLP are so high, it seems likely that companies like Google, Facebook and Amazon will be the only ones who can do the high-end research necessary to drive the field forward. Academics at universities have an incentive to create more efficient models, which they then publish; companies can take advantage of those new models while also having access to far more computational resources.

From where I’m standing, this makes it seem that private companies will always be at the forefront of AI development, which makes me less optimistic than if it were driven by academics. Maybe I’m just being naive (and probably also biased), but this seems less than ideal.

You can find the full paper here on arXiv.

Comment: ‘Robots’ Are Not ‘Coming for Your Job’—Management Is

…in practice, ‘the robots are coming for our jobs’ usually means something more like ‘a CEO wants to cut his operating budget by 15 percent and was just pitched on enterprise software that promises to do the work currently done by thirty employees in accounts payable.’

Merchant, B. (2019). ‘Robots’ Are Not ‘Coming for Your Job’—Management Is. Gizmodo.

It’s important to understand that “technological progress” is not an inexorable march towards an inevitable conclusion that we are somehow powerless to change. We – people – make decisions that influence where we’re going and to some extent, where we end up is evidence of what we value as a society.

Comment: Scientists teach computers fear—to make them better drivers

The scientists placed sensors on people’s fingers to record pulse amplitude while they were in a driving simulator, as a measure of arousal. An algorithm used those recordings to learn to predict an average person’s pulse amplitude at each moment on the course. It then used those “fear” signals as a guide while learning to drive through the virtual world: If a human would be scared here, it might muse, “I’m doing something wrong.”

Hutson, M. (2019). Scientists teach computers fear—to make them better drivers. Science magazine.

This makes intuitive sense; algorithms have no idea what humans fear, nor even what “fear” is. This project takes human fight-or-flight physiological data and uses it to train an autonomous driving algorithm, giving it a sense of what we feel when we face anxiety-producing situations. The system can use those fear signals to identify more quickly when it is moving into dangerous territory, adjusting its behaviour to be less risky.
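As a rough sketch of how that could work (this is my own illustration, not the authors’ code; the function names and penalty weight are made up), assume we already have a model that predicts an average human’s arousal – say, normalised pulse amplitude – from the current driving state. That prediction can then be folded into the reward the learning agent receives:

def shaped_reward(task_reward, state, fear_model, fear_weight=0.5):
    # fear_model is assumed to be a callable that maps the current state to a
    # predicted human arousal level (e.g. normalised pulse amplitude, 0 to 1),
    # trained on the kind of simulator recordings described in the article.
    predicted_fear = fear_model(state)
    # The agent still chases its task reward (speed, staying on the road),
    # but is nudged away from situations a person would find frightening,
    # even before it has ever experienced a crash.
    return task_reward - fear_weight * predicted_fear

The appeal of something like this is that the fear signal is available at every moment of the drive, whereas crashes are rare, so the agent gets useful feedback long before anything goes catastrophically wrong.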

There are interesting potential use cases in healthcare; surgery, for example. When training algorithms on simulations or games, errors do not lead to high-stakes consequences. However, when trusting machines to make potentially life-threatening choices, we’d like them to be more circumspect and risk-averse. One of the challenges is getting them to include a human’s perception of risk in the decision-making process. An algorithm could learn that cutting a particular artery is likely to lead to death by cutting that artery hundreds of times (in simulation) and noting the outcome. The fear-signal approach suggests an alternative: the algorithm “senses” a fear response in the surgeon before the artery is cut, and could send a signal indicating that it should slow down and call for help. This could help when deciding whether or not surgical machines should have greater autonomy when performing surgery, because we could have more confidence that they’d ask for human intervention at appropriate times.

SAAHE podcast on building a career in HPE

In addition to the In Beta podcast that I host with Ben Ellis (@bendotellis), I’m also involved with a podcast series on health professions education with the South African Association of Health Educators (SAAHE). I’ve just published a conversation with Vanessa Burch, one of the leading South African scholars in this area.

You can listen to this conversation (and earlier ones) by searching for “SAAHE” in your podcast app, subscribing and then downloading the episode. Alternatively, listen online at http://saahe.org.za/2019/06/8-building-a-career-in-hpe-with-vanessa-burch/.

In this wide-ranging conversation, Vanessa and I discuss her 25 years in health professions education and research. We look at the changes that have taken place in the domain over the past 5-10 years and how this has affected the opportunities available to South African health professions educators in the early stages of their careers. We talk about developing the confidence to approach people you may want to work with, from the days when you had to be physically present at a conference workshop, to exploring novel ways of connecting with colleagues in a networked world. We discuss Vanessa’s role in establishing the Southern African FAIMER Regional Institute (SAFRI), as well as the African Journal of Health Professions Education (AJHPE), and what we might consider when presented with opportunities to drive change in the profession.

Vanessa has a National Excellence in Teaching and Learning Award from the Council of Higher Education and the Higher Education Learning and Teaching Association of South Africa (HELTASA), and holds a Teaching at University (TAU) fellowship from the Council for Higher Education of South Africa. She is a Deputy Editor at the journal Medical Education, and Associate Editor of Advances in Health Sciences Education. Vanessa was Professor and Chair of Clinical Medicine at the University of Cape Town from 2008 to 2018 and is currently Honorary Professor of Medicine at UCT. She works as an educational consultant to the Colleges of Medicine of South Africa.