Categories
ethics reading

PSA: Peter Singer’s “The Life You Can Save” is available for free

In 2009, Peter Singer wrote the first edition of The Life You Can Save to demonstrate why we should care about and help those living in global extreme poverty, and how easy it is to improve and even save lives by giving effectively.

This morning I listened to an 80,000 Hours podcast with Peter Singer and learned that, on the 10th anniversary of its publication, his book, The Life You Can Save, is now available as a free ebook and audiobook (you can get the audiobook as a podcast subscription, which is very convenient). Singer’s ideas in this book, and in Practical Ethics, have been hugely influential in my thinking and teaching, and I thought that more people might be interested in the ideas that he shares.

Click on the image below to get to the download page.

Categories
AI research

#APaperADay – The Last Mile: Where Artificial Intelligence Meets Reality

“…implementation should be seen as an agile, iterative, and lightweight process of obtaining training data, developing algorithms, and crafting these into tools and workflows.”

Coiera, E. (2019). The Last Mile: Where Artificial Intelligence Meets Reality. Journal of Medical Internet Research, 21(11), e16323. https://doi.org/10.2196/16323

A short article (2 pages of text) describing the challenges of building AI systems without understanding that technological solutions are only relevant when they solve real world problems that we care about, and when they are built within the systems that they will ultimately be used in.

Note: I found it hard not to just rewrite the whole paper because I really like the way Coiera writes and find that his economy with words makes it hard to cut things out i.e. I think that it’s all important text. I tried to address this by making my notes without looking at the original article, and then going back over the notes and rewriting them.


Technology shapes us as we shape it. Humans and machines form a sociotechnical system.

The application of technology should be shaped by the problem at hand and not the technology itself. But we see the opposite of this today, with companies building technologies that are then used to solve “problems” that no-one thought were problems. Most social media fits this description.

Technological innovations may create new classes of solution, but it’s only in the real world that we see which problems are worth addressing and which solutions are most appropriate. Even when a technology is presented as a solution, it’s up to us to decide whether it’s actually the best solution, and whether the problem is even important.

There are two broad research agendas for AI:

  1. The technical aspects of building machine intelligence.
  2. The application of machine intelligence to real world problems that we care about.

In our drive to accelerate progress in the first area, we may lose sight of the second. For example, even though image recognition is developing very quickly, image recognition systems have had little clinical impact to date. In some cases they may even make clinical outcomes worse, as when overdiagnosis of a condition causes an increase in management (and the associated costs and exposure to harm) even though treatment options remain unchanged.

There are three stages of development with data-driven technologies like AI-based systems:

  1. Data are acquired, labelled and cleaned.
  2. Technical performance is built and tested in controlled environments.
  3. Algorithms are applied in real-world contexts.

It’s only really in the last stage that it becomes clear that “AI does nothing on its own” i.e. all technology is embedded in the sociotechnical systems mentioned earlier and is intricately connected to people and the choices that people make. This makes sociotechnical systems messy and complex, and therefore immune to the “solutions” touted by technology companies.

Some of the “last mile” challenges of AI implementation include:

  1. Measurement: We use standard metrics of AI performance to show improvement. But these metrics are often only useful in controlled experiments and are divorced from the practical realities of implementation in the clinical context.
  2. Generalisation and calibration: AI systems are trained on historical data and so future performance of the algorithm is dependent on how well the historical data matches the new context.
  3. Local context: The complexity of interacting variables within local contexts mean that any system will have to be fine-tuned to the organisation in which it is embedded. Organisations also change over time, meaning that the AI will need to be adjusted as well.

The author also provides possible solutions to these challenges.

Software development has moved from a linear process to an iterative model where systems are developed in situ through interaction with users in the real world. Google, Facebook, Amazon, etc. do this all the time by exposing small subsets of users to changes in the platform, and then measuring differences in engagement using metrics that the platforms care about (time spent on Facebook, or number of clicks on ads).
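As a rough illustration of that rollout pattern (all users, metrics and effect sizes here are invented), it amounts to exposing a small random subset of users to a change and comparing an engagement metric between the two groups:

```python
import random

random.seed(1)

# 10,000 simulated users; a random 500 of them see the new feature.
users = list(range(10_000))
random.shuffle(users)
treatment = set(users[:500])

def clicks(user_id):
    # Baseline engagement, plus an assumed (invented) lift of ~1 click
    # for users who see the change.
    base = random.gauss(10, 2)
    return base + (1.0 if user_id in treatment else 0.0)

control_mean = sum(clicks(u) for u in users
                   if u not in treatment) / (len(users) - 500)
treatment_mean = sum(clicks(u) for u in treatment) / 500

# The measured lift, close to 1 by construction.
print(treatment_mean - control_mean)
```

The point of the blog’s argument is that the platforms get to choose the metric being compared here, and the metric (clicks, time on site) may have nothing to do with what users actually value.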

In healthcare we’ll need to build systems in which AI-based technologies are implemented, not as completed solutions, but with the understanding that they will need refinement and adaptation through iterative use in complex, local contexts. Ideally, they will be built within the systems they are going to be used in.

Categories
conference education physiotherapy

Comment: Science conferences are stuck in the dark ages

…for decades the room has been the same: four walls, a podium, and a projector. PowerPoints today mimic the effect of a centuries-old continuous-slide lantern. Even when time is occasionally left for questions at the end of lectures, it’s still a distinctly one-way flow of information. Scientific posters are similarly archaic.

Ngumbi, E. & Lovett, B. (2019). Science Conferences Are Stuck in the Dark Ages. Wired magazine.

Anyone who’s gone to an academic conference and reflected on it for more than a moment usually arrives at the conclusion that the experience is distinctly underwhelming. I’m not going to go into the details of why since Ben and I discussed it at length in our reflection on WCPT and the Unposter on the podcast, but the general idea is that most conferences suck because of the format.

And this is why you really need to think about coming to the second In Beta unconference on physiotherapy education at HAN in the Netherlands on the 14th and 15th of September 2020. The unconference will take place soon after the ENPHE/ER-WCPT conference, so if you’re attending that meeting then it’s a no-brainer to stay on for a few days and come to Nijmegen for something quite different. Click on the image below for more information.

Categories
reading research

#APaperADay: It’s Time for Medical Schools to Introduce Climate Change Into Their Curricula

This is my first attempt to share a short summary of a paper that I’ve read as part of my #APaperADay project, where I try to put aside the last 30-60 minutes of every day for reading and summarising an article. Obviously, I’m not going to be able to finish an article a day so these won’t be daily posts.

Also, paper selection is likely to be arbitrary. This isn’t an attempt to find “the best” or “most interesting” articles. It’s probably just me going through my reading list and choosing something based on how much time I have left in the day.

I’m going to try and make these summaries short and may also start adding my own commentary within the main text as part of an attempt to engage more deeply with the subject. Please don’t assume that my summaries are 1) accurate representations of the actual content, 2) substitutes for reading the original, 3) appropriate sources of knowledge in their own right.


Citation: Wellbery, C., Sheffield, P., Timmireddy, K., Sarfaty, M., Teherani, A., & Fallar, R. (2018). It’s Time for Medical Schools to Introduce Climate Change Into Their Curricula. Academic Medicine, 93(12), 1774–1777. https://doi.org/10.1097/ACM.0000000000002368

This is a position piece that begins by describing the impact of human beings on the planet (the Anthropocene).

The effects of climate change will disproportionately affect the most vulnerable populations (the very old and very young, those who are sick, and those who are poor).

Current efforts in health professions education (HPE) policy have been directed towards preparing health professionals to help address the effects of climate change. However, medical schools have not made much headway in updating their curricula to explicitly include this new content.

Rationale for including climate change in medical education

  1. Today’s generation of health professions students has a large stake in developing a strategic response.
  2. The health effects of climate change are getting worse, and health professionals will need to be adequately prepared to meet this challenge.
  3. It is everyone’s responsibility to drive efforts at reducing the environmental footprint of healthcare, which is a non-trivial contributor to global warming.
  4. Climate change will disproportionately affect the most vulnerable populations, whom health professionals are obliged to help.
  5. The inclusion of climate change will facilitate the development of thinking skills that are (hopefully) transferable to other aspects of the curriculum.

Current curricular interventions

There needs to be a rethinking of the division between public and individual health. Climate change will increasingly affect the environment, which will increasingly affect people. These complex interactions among complex variables will affect political, social, scientific, and economic domains, all of which are currently beyond the scope of medical education.

Climate change as a topic of discussion can be relatively easily integrated into medical curricula, alongside already existing conditions. For example, a discussion on asthma could include the negative effect of global warming on this particular condition. In other words, climate change need not be included as a separate module/subject/topic but could be integrated with the current curriculum.

“Climate-relevant examples and the overarching macrocosmic mechanisms linking them to individual disease processes could broaden discussions of such topics as cardiovascular health (related to changing air quality), sexually transmitted infections (related to displaced populations), and mental health disorders (related both to displaced populations and also to extreme weather).”

The article finishes with a few examples of how some medical schools have incorporated climate change into their curricula. It seems likely that this is something that will need to happen over time i.e. programmes can’t simply dump a load of “global warming/climate change” content into the curriculum overnight.

Comment: This is a short paper that might be interesting for someone who’d like to know why climate change should be a topic of interest in health professions education. If you’re already familiar with the topic, even in passing, you’re probably not going to get much from it. But it may be useful to pass on to someone who thinks that climate change isn’t relevant in a health professions curriculum.


Categories
personal

A review of 2019 and plans for 2020

One year ago today I posted some of the plans that I had for the year and this is a brief review of those plans, as well as starting to think about what I might get into for 2020.

Writing: I managed to stick to my goal of writing every day with the caveat that I obviously can’t always write as much as I’d like to every day. I did manage to carve out about 2 hours, for at least 3 days a week, which I used to write papers, blog posts and provide feedback for postgraduate students. I’ll keep to that plan of putting aside a few hours every morning for 2020 although I’m hoping to spend more of that time on more informal writing rather than putting out more research articles.

Research and exchange: I visited Oslo for 2 weeks in August with a small group of undergraduate students and a colleague from my department, as part of a research project on internationalisation. This year, we’ll host 4 students and 2 lecturers from OsloMet in my department in March. The Oslo visit was a fantastic experience – for lecturers and students – and I cannot wait for our Norwegian colleagues to come to Cape Town. It wasn’t just a brilliant academic experience; it was also wonderful to spend some time wandering around the parks in Oslo.

Sognsvann Lake outside of Oslo.

I also completed a survey on the perceptions of physiotherapy clinicians on the impact of AI on clinical practice, and hope to complete the associated interviews early this year. And finally, I’ve decided not to apply to have my NRF rating re-evaluated, since I realised that I was spending more time than I was happy with simply dealing with the admin of being rated. With the added pressure to keep meeting benchmarks in an already hyper-competitive field, I decided that it was serving as little more than an unwanted distraction from the things that really brought me joy in my professional work in 2019. Speaking of which…

In Beta: The first In Beta unconference was held in May in Lausanne and was incredible; definitely one of the highlights of 2019 for me. Thanks so much to Guillaume Christe and Veronika Schoeb for all of their assistance, not only in making it possible but in making it awesome. I’m super excited to be working with Ben and Joost to prepare the second unconference at HAN in the Netherlands from 14-15 September.

We didn’t get to publish as many podcasts in 2019 as I’d hoped (although we did end up recording quite a few) because of the significant time it takes to edit them. On the other hand we did start the In Beta monthly newsletter which has been a fun little experiment. For 2020 we’ll keep working on the podcasts (the first discussion is scheduled for the 21st of January) and the newsletter (here’s the January edition). There’s also a rumour of an In Beta Introduction to Physiotherapy Education open access book but that can’t be true because we’d never take on anything so unrealistic and unlikely to materialise. Surely? Oh, and Ben started an In Beta Twitter account so you should probably check that out.

Technology: I’ve done more in 2019 to move myself away from social media in general, as well as finding alternatives to Google products (this decision needs a whole post to explore). My main goal is to try and get off of closed platforms and try to use open source software where possible. I’ve also been experimenting with some Indieweb applications and ideas, which has been fun. I’ll probably write a few updates on this process during the year.

365 project: I managed to take a photo a day for 346 days in 2019, which wasn’t too bad. Here are some of my favourites.

I won’t continue with the photo a day project this year but will be trying to work on other projects “with my hands”. What this means exactly is yet to be determined but I think it’ll most likely involve making/building/creating things. For example, I’m going to start sketching (see this early attempt at a gecko).

And also restoring some of the old furniture we’ve had in the house for years, like this wardrobe that my mother-in-law had when she was a child.

Exercise: I didn’t get to do as much cycling as I’d hoped for but I did start trail running in September. I’ve gone running for 3 days of almost every week since I first went out and have found it to be…interesting. I won’t say that I enjoy it but I haven’t stopped doing it so there’s that. I basically just want to make it so that I’m less likely to have a heart attack when I’m 45.

Trail running around the back of Lion’s Head.

Reading: I read 36 books in 2019 and 3 million words in Pocket (which it estimates is the equivalent of about 40 books but who knows how they come up with that). While I’m fairly happy with how much I’m reading I’m pretty depressed with how little I remember. Which is why I’m going to try and read fewer books/words in 2020 and pay more attention to how I engage with what I’m reading. As it is I read all of my books in the Moon+ reader app and export all highlights and notes into Joplin. But then I don’t take the next step of reviewing, editing and connecting those notes to other concepts in order to extract what is most useful from what I’ve read. I’ll be doing more of that kind of “post-reading” work in 2020. I also didn’t read as many research papers in 2019 as I would’ve liked but then again, I don’t want to fall into the trap of reading articles just so that I can tick them off. With that in mind I’m going to try and read a paper a day during the last hour of work during the week and then post short summaries here (note that reading a paper a day isn’t the same thing as finishing a paper a day, so at least I’ve got some breathing space there).

Productivity: I did pretty well with my plan to restructure my day at work so that I don’t have to do anything in the evenings and on weekends. I think that there may have been a total of a few weeks in the year when I had to do some work at night, and it was usually the result of an influx of urgent tasks from others rather than an inability to manage my time effectively. I spent a lot of time refining my workflow and I think that that alone helped me to stay focused and get stuff done. I managed to stick to a fairly regular meditation schedule for about half the year but then got out of the habit and didn’t pick it up again. I’ll probably make another attempt this year.

That’s about it from my side. I hope that you have a great 2020.

Categories
education technology

Podcast: Are the kids alright?

In this anxious era of bullying, teen depression, and school shootings, tech companies are selling software to schools and parents that make big promises about keeping kids secure by monitoring what they say and write online. But these apps demand disturbing trade-offs in the name of safety.

This is a great episode of the Rework podcast looking at the dangers of using increasingly sophisticated technology in schools as part of programmes to “protect” children. What they really amount to are very superficial surveillance systems that can do a lot less than what the venture-backed companies say they can. If you’re a teacher or if you have kids at a school using these systems, this is a topic worth learning more about.

The show notes include a ton of links to excellent resources and also a complete transcript of the episode.

Categories
AI clinical

Comment: Computer vision is far from solved.

You could argue that because these pictures are designed to fool AI, it’s not exactly a fair fight. But it’s surely better to understand the weaknesses of these systems before we put our trust in them.

Vincent, J. (2019). The mind-bending confusion of ‘hammer on a bed’ shows computer vision is far from solved. The Verge.

This is an important issue to be aware of: published studies showing that AI is vastly superior to human perception may be true only in very narrow, tightly controlled situations. If we’re not aware of that, we may place too much trust in systems that are fundamentally biased or inaccurate when it comes to performance in the real world.

For example, consider decision-making in expert systems (something like IBM’s Watson) where the system is trained on retrospective data, usually from places where they have a lot of data. This might translate into the system making suggestions for patient management based on what has been done in the past, in circumstances that are completely different to the current context. If I’m a family practitioner practising in rural South Africa, it may not be that useful to know what an expert oncologist in Boston would have done in a similar situation.

It’s unlikely that the management options provided by the system are feasible for implementation because of differences in people, culture, language, society, health systems, etc. But unless I know that the data my expert system was trained on is contextually flawed, I may simply go ahead and then have no idea why it fails. It’s important to test AI systems in situations where we know they’ll break before we roll them out in the real world.

Categories
AI clinical

Comment: The danger of AI is weirder than you think.

AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they’re optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don’t have any concept of what this content actually is, and they don’t have any concept of what the consequences might be of recommending this content.

Shane, J. (2019). The danger of AI is weirder than you think. TED.

We don’t need to worry about AI that is conscious (yet), only that it is competent and that we’ve given it a poorly considered problem to solve. When we think about the solution space for AI-based systems we need to be aware that the “correct” solution for the algorithm is one that literally solves the problem, regardless of the method.

The danger of AI isn’t that it’s going to rebel against us, but that it’s going to do exactly what we ask it to.

Janelle Shane

This matters in almost every context we care about. Consider the following scenario. ICUs are very expensive for a lot of good reasons: they have a very specialised workforce, a very high staff-to-patient ratio, the time spent with each patient is very high, and the medication is crazy expensive. We might reasonably ask an AI to reduce the cost of running an ICU, thinking that it could help to develop more efficient workflows, for example. But the algorithm might come to the conclusion that the most cost-effective solution is to kill all the patients. According to the problem we posed, this isn’t incorrect, but it’s clearly not what we were looking for, and any human being on earth, including small children, would understand why.
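The ICU scenario reduces to a toy optimisation problem (the strategies and numbers are entirely invented): an optimiser told only to minimise cost picks the degenerate option, and it only behaves sensibly once the common-sense constraint we left unstated is made explicit.

```python
# Three candidate "strategies" for running an ICU, with invented costs.
strategies = {
    "streamline workflows": {"cost": 80, "patients_treated": 100},
    "reduce staff ratio":   {"cost": 60, "patients_treated": 70},
    "admit no patients":    {"cost": 0,  "patients_treated": 0},
}

def naive_choice(options):
    # The literal objective: lowest cost wins, nothing else matters.
    return min(options, key=lambda name: options[name]["cost"])

def constrained_choice(options, min_patients):
    # The objective we actually meant, with the common-sense bound
    # ("you still have to treat patients") made explicit.
    viable = {n: v for n, v in options.items()
              if v["patients_treated"] >= min_patients}
    return min(viable, key=lambda name: viable[name]["cost"])

print(naive_choice(strategies))            # "admit no patients"
print(constrained_choice(strategies, 70))  # "reduce staff ratio"
```

The naive optimiser isn’t wrong about the objective it was given; the objective was wrong about what we wanted.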

Before we can ask AI-based systems to help solve problems we care about, we’ll need to first develop a language for communicating with them. A language that includes the common sense parameters that inherently bound all human-human conversation. When I ask a taxi driver to take me to the airport “as quickly as possible”, I don’t also need to specify that we shouldn’t break any rules of driving, and that I’d like to arrive alive. We both understand the boundaries that define the limits of my request. As the video above shows, an AI doesn’t have any “common sense” and this is a major obstacle for progress towards having AI that can address real world problems beyond the narrow contexts where they are currently competent.

Categories
education learning

Comment: The game of school.

Schools are about learning, but it’s mostly learning how to play the game. At some level, even though we like to talk about schools as though they are about learning in some pure, liberal-arts sense, on a pragmatic level we know that what we’re really teaching students is to get done the things that they are asked to do, to get them done on time, and to get them done with as few mistakes as possible.

I think the danger comes from believing that those who by chance, genetics, temperament, family support, or cultural background find the game easier to play are actually somehow inherently better or have more human value than the other students.

The students who aren’t succeeding usually don’t have any idea that school is a game. Since we tell them it’s about learning, when they fail they then internalize the belief that they themselves are actual failures–that they are not good learners. And we tell ourselves some things to feel OK about this taking place: that some kids are smart and some are not, that the top students will always rise to the top, that their behavior is not the result of the system but that is their own fault.

Hargadon, S. (2019). The game of school. Steve Hargadon blog: The learning revolution has begun.

I thought that this was an interesting post with a few ideas that helped me to think more carefully about my own teaching. I’ve pulled out a few of the sentences from the post that really resonated with me but there are plenty more. Once you accept the idea that school (and university) is a game, it all makes a lot more sense; ranking students in leaderboards, passing and failing (as in quests or missions), levelling up, etc.

The author then goes on to present 4 hierarchical “levels” of learning that really describe frameworks or paradigms rather than any real description of learning (i.e. the categories and names of the levels in the hierarchy are, to some extent, arbitrary; it’s the descriptions in each level that count).

If I think about our own physiotherapy programme, we use all 4 “levels” interchangeably and have varying degrees of each of them scattered throughout the curriculum. However, I’d say that the bulk of our approach happens at the lowest level of Schooling, some at Training, a little at Education, and almost none at Self-regulated learning. While we pay lip service to the fact that we “offer opportunities for self-regulated learning”, what it really boils down to is that we give students reading to do outside of class time.

Categories
Publication

Article: Predatory journals: No definition, no defense.

Everyone agrees that predatory publishers sow confusion, promote shoddy scholarship and waste resources. What is needed is consensus on a definition of predatory journals. This would provide a reference point for research into their prevalence and influence, and would help in crafting coherent interventions.

Grudniewicz, A. et al. (2019). Predatory journals: No definition, no defence. Nature, 576, 210–212. https://doi.org/10.1038/d41586-019-03759-y

A variety of checklists exist to determine whether a journal is “predatory”, but the challenge is that these lists are inconsistent with one another and often overlap, which is not helpful for authors.

The consensus definition reached by the authors of the paper:

Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.

Further details of the main concepts in the definition are included in the article.


Note: Some parts of this article were cross-posted at OpenPhysio, an open-access, peer-reviewed online journal with a focus on physiotherapy education. If you’re doing interesting work in the classroom, even if you have no experience in publishing educational research, we’d like to help you share your stories.