Categories
AI clinical

Article published – An introduction to machine learning for clinicians

It’s a nice coincidence that my article on machine learning for clinicians has been published at around the same time that my poster on a similar topic was presented at WCPT. I’m quite happy with this paper and think it offers a useful overview of machine learning that is specific to clinical practice and will help clinicians make sense of what is at times a confusing topic. The mainstream media (and, to be honest, many academics) conflate a wide variety of terms when they talk about artificial intelligence, and this paper goes some way towards providing background for anyone interested in how the technology will affect clinical work. You can download the preprint here.


Abstract

The technology at the heart of the most innovative progress in health care artificial intelligence (AI) is in a sub-domain called machine learning (ML), which describes the use of software algorithms to identify patterns in very large data sets. ML has driven much of the progress of health care AI over the past five years, demonstrating impressive results in clinical decision support, patient monitoring and coaching, surgical assistance, patient care, and systems management. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of intelligent machines to analyze and interpret the complex interactions between data, patients, and clinical decision-makers. However, as this technology becomes more powerful it also becomes less transparent, and algorithmic decisions are therefore increasingly opaque. This is problematic because computers will increasingly be asked for answers to clinical questions that have no single right answer, are open-ended, subjective, and value-laden. As ML continues to make important contributions in a variety of clinical domains, clinicians will need to have a deeper understanding of the design, implementation, and evaluation of ML to ensure that current health care is not overly influenced by the agenda of technology entrepreneurs and venture capitalists. The aim of this article is to provide a non-technical introduction to the concept of ML in the context of health care, the challenges that arise, and the resulting implications for clinicians.

Categories
AI clinical

WCPT poster: Introduction to machine learning in healthcare

It’s a bit content-heavy and not as graphic-y as I’d like, but c’est la vie.

I’m quite proud of what I think is an innovation in poster design: the addition of the tl;dr column before the findings. In other words, if you only have 30 seconds to look at the poster, that’s the bit to focus on. Related to this, I’ve also moved the Background, Methods and Conclusion sections to the bottom and made them smaller so as to emphasise the Findings, which are placed first.

Here is the tl;dr version. Or, my poster in 8 tweets:

  • Aim: The aim of the study was to identify the ways in which machine learning algorithms are being used across the health sector that may impact physiotherapy practice.
  • Image recognition: Millions of patient scans can be analysed in seconds, and diagnoses made by non-specialists via mobile phones, with lower rates of error than humans are capable of.
  • Video analysis: Constant video surveillance of patients will alert providers of those at risk of falling, as well as make early diagnoses of movement-related disorders.
  • Natural language processing: Unstructured, freeform clinical notes will be converted into structured data that can be analysed, leading to increased accuracy in data capture and diagnosis.
  • Robotics: Autonomous robots will assist with physical tasks like patient transportation and possibly even take over manual therapy tasks from clinicians.
  • Expert systems: Knowing things about conditions will become less important than knowing when to trust outputs from clinical decision support systems.
  • Prediction: Clinicians should learn how to integrate the predictions of machine learning algorithms with human values in order to make better clinical decisions in partnership with AI-based systems.
  • Conclusion: The challenge we face is to bring together computers and humans in ways that enhance human well-being, augment human ability and expand human capacity.
My full-size poster on machine learning in healthcare for the 2019 WCPT conference in Geneva.

Reference list (download this list as a Word document)

  1. Yang, C. C., & Veltri, P. (2015). Intelligent healthcare informatics in big data era. Artificial Intelligence in Medicine, 65(2), 75–77. https://doi.org/10.1016/j.artmed.2015.08.002
  2. Qayyum, A., Anwar, S. M., Awais, M., & Majid, M. (2017). Medical image retrieval using deep convolutional neural network. Neurocomputing, 266, 8–20. https://doi.org/10.1016/j.neucom.2017.05.025
  3. Li, Z., Zhang, X., Müller, H., & Zhang, S. (2018). Large-scale retrieval for medical image analytics: A comprehensive review. Medical Image Analysis, 43, 66–84. https://doi.org/10.1016/j.media.2017.09.007
  4. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056
  5. Pratt, H., Coenen, F., Broadbent, D. M., Harding, S. P., & Zheng, Y. (2016). Convolutional Neural Networks for Diabetic Retinopathy. Procedia Computer Science, 90, 200–205. https://doi.org/10.1016/j.procs.2016.07.014
  6. Ramzan, M., Shafique, A., Kashif, M., & Umer, M. (2017). Gait Identification using Neural Network. International Journal of Advanced Computer Science and Applications, 8(9). https://doi.org/10.14569/IJACSA.2017.080909
  7. Kidziński, Ł., Delp, S., & Schwartz, M. (2019). Automatic real-time gait event detection in children using deep neural networks. PLOS ONE, 14(1), e0211466. https://doi.org/10.1371/journal.pone.0211466
  8. Horst, F., Lapuschkin, S., Samek, W., Müller, K.-R., & Schöllhorn, W. I. (2019). Explaining the Unique Nature of Individual Gait Patterns with Deep Learning. Scientific Reports, 9(1), 2391. https://doi.org/10.1038/s41598-019-38748-8
  9. Cai, T., Giannopoulos, A. A., Yu, S., Kelil, T., Ripley, B., Kumamaru, K. K., … Mitsouras, D. (2016). Natural Language Processing Technologies in Radiology Research and Clinical Applications. RadioGraphics, 36(1), 176–191. https://doi.org/10.1148/rg.2016150080
  10. Jackson, R. G., Patel, R., Jayatilleke, N., Kolliakou, A., Ball, M., Gorrell, G., … Stewart, R. (2017). Natural language processing to extract symptoms of severe mental illness from clinical text: The Clinical Record Interactive Search Comprehensive Data Extraction (CRIS-CODE) project. BMJ Open, 7(1), e012012. https://doi.org/10.1136/bmjopen-2016-012012
  11. Kreimeyer, K., Foster, M., Pandey, A., Arya, N., Halford, G., Jones, S. F., … Botsis, T. (2017). Natural language processing systems for capturing and standardizing unstructured clinical information: A systematic review. Journal of Biomedical Informatics, 73, 14–29. https://doi.org/10.1016/j.jbi.2017.07.012
  12. Montenegro, J. L. Z., Da Costa, C. A., & Righi, R. da R. (2019). Survey of Conversational Agents in Health. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2019.03.054
  13. Carrell, D. S., Schoen, R. E., Leffler, D. A., Morris, M., Rose, S., Baer, A., … Mehrotra, A. (2017). Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings. Journal of the American Medical Informatics Association, 24(5), 986–991. https://doi.org/10.1093/jamia/ocx039
  14. Oña, E. D., Cano-de la Cuerda, R., Sánchez-Herrera, P., Balaguer, C., & Jardón, A. (2018). A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb. Journal of Healthcare Engineering, 2018, 1–19. https://doi.org/10.1155/2018/9758939
  15. Krebs, H. I., & Volpe, B. T. (2015). Robotics: A Rehabilitation Modality. Current Physical Medicine and Rehabilitation Reports, 3(4), 243–247. https://doi.org/10.1007/s40141-015-0101-6
  16. Leng, M., Liu, P., Zhang, P., Hu, M., Zhou, H., Li, G., … Chen, L. (2019). Pet robot intervention for people with dementia: A systematic review and meta-analysis of randomized controlled trials. Psychiatry Research, 271, 516–525. https://doi.org/10.1016/j.psychres.2018.12.032
  17. Piatt, J., Nagata, S., Šabanović, S., Cheng, W.-L., Bennett, C., Lee, H. R., & Hakken, D. (2017). Companionship with a robot? Therapists’ perspectives on socially assistive robots as therapeutic interventions in community mental health for older adults. American Journal of Recreation Therapy, 15(4), 29–39. https://doi.org/10.5055/ajrt.2016.0117
  18. Troccaz, J., Dagnino, G., & Yang, G.-Z. (2019). Frontiers of Medical Robotics: From Concept to Systems to Clinical Translation. Annual Review of Biomedical Engineering, 21(1). https://doi.org/10.1146/annurev-bioeng-060418-052502
  19. Riek, L. D. (2017). Healthcare Robotics. ArXiv:1704.03931 [Cs]. Retrieved from http://arxiv.org/abs/1704.03931
  20. Kappassov, Z., Corrales, J.-A., & Perdereau, V. (2015). Tactile sensing in dexterous robot hands — Review. Robotics and Autonomous Systems, 74, 195–220. https://doi.org/10.1016/j.robot.2015.07.015
  21. Choi, C., Schwarting, W., DelPreto, J., & Rus, D. (2018). Learning Object Grasping for Soft Robot Hands. IEEE Robotics and Automation Letters, 3(3), 2370–2377. https://doi.org/10.1109/LRA.2018.2810544
  22. Shortliffe, E., & Sepulveda, M. (2018). Clinical Decision Support in the Era of Artificial Intelligence. Journal of the American Medical Association.
  23. Attema, T., Mancini, E., Spini, G., Abspoel, M., de Gier, J., Fehr, S., … Sloot, P. M. A. (n.d.). A new approach to privacy-preserving clinical decision support systems. 15.
  24. Castaneda, C., Nalley, K., Mannion, C., Bhattacharyya, P., Blake, P., Pecora, A., … Suh, K. S. (2015). Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. Journal of Clinical Bioinformatics, 5(1). https://doi.org/10.1186/s13336-015-0019-3
  25. Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA Internal Medicine, 178(11), 1544. https://doi.org/10.1001/jamainternmed.2018.3763
  26. Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. ArXiv:1804.02969 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1804.02969
  27. Weng, S. F., Reps, J., Kai, J., Garibaldi, J. M., & Qureshi, N. (2017). Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLOS ONE, 12(4), e0174944. https://doi.org/10.1371/journal.pone.0174944
  28. Suresh, H., Hunt, N., Johnson, A., Celi, L. A., Szolovits, P., & Ghassemi, M. (2017). Clinical Intervention Prediction and Understanding using Deep Networks. ArXiv:1705.08498 [Cs]. Retrieved from http://arxiv.org/abs/1705.08498
  29. Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689
  30. Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA, 319(1), 19. https://doi.org/10.1001/jama.2017.19198
Categories
AI clinical

Ontario is trying a wild experiment: Opening access to its residents’ health data

This has led companies interested in applying AI to healthcare to find different ways to scoop up as much data as possible. Google partnered with Stanford and Chicago university hospitals to collect 46 billion data points on patient visits. Verily, also owned by Google’s parent company Alphabet, is recruiting 10,000 people for its own long-term health studies. IBM has spent the last few years buying up health companies for their data, accumulating records on more than 300 million people.

Source: Gershgorn, D. (2018). Ontario is trying a wild experiment: Opening access to its residents’ health data.

I’ve pointed to this problem before: it’s important that we have patient data repositories that are secure and maintain patient privacy, but we also need to use that data to make better decisions about patient care. Just as any research project needs carefully managed (and accurate) data, so too will AI-based systems. At the moment, this gives a huge competitive advantage to companies like Google, which can afford to buy that data indirectly by acquiring smaller companies. But even that isn’t sustainable because there’s “no single place where all health data exists”.

This decision by the Ontario government seems to be a direct move against the current paradigm. By making patient data available via an API, researchers will be able to access only the data that patients have approved for specific uses, and it can remain anonymous. They get the benefit of access to enormous caches of health-related information while patient privacy is simultaneously protected. Of course, there are challenges that will need to be addressed, including issues around security, governance, and differing levels of access permissions.
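
To make the idea concrete, here is a purely hypothetical sketch of what consent-scoped access to de-identified records might look like for a researcher. The endpoint, token, field names and consent scope are all invented for illustration; the article doesn’t describe Ontario’s actual interface.

```python
# Purely hypothetical sketch of consent-scoped access to de-identified records.
# The endpoint, token, field names and consent scope below are invented for
# illustration; they are not Ontario's actual API.
import requests

BASE_URL = "https://api.example-health-platform.ca/v1"  # hypothetical endpoint
TOKEN = "researcher-access-token"                       # issued for an approved study only

response = requests.get(
    f"{BASE_URL}/records",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={
        # Only fields the platform has approved for this use, already de-identified
        "fields": "age_band,diagnosis_code,treatment_outcome",
        # Ties the request to a specific, patient-approved research use
        "consent_scope": "study-123",
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```

The point is simply that the platform, not the researcher, decides which fields are returned and under which consent scope.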

And those are just the technical issues (a big problem, since medical software is often poorly designed). They don’t take into account the ethics of making decisions about individual patients based on aggregate data. For example, if an algorithm suggests that other patients who look like Bob tend not to follow medical advice and default on treatment, should medical insurers deny Bob coverage? These and many other issues will need to be resolved before AI in healthcare can really take off.

Categories
AI clinical

The first AI approved to diagnose disease is tackling blindness in rural areas

There are any number of reasons why people don’t get medical care or don’t follow up on a referral to a specialist. They might not think they have a serious problem. They might lack time off work, reliable transportation, or health insurance. And those are problems AI alone can’t solve.

Source: Mullin, E. (2018). The first AI approved to diagnose disease is tackling blindness in rural areas.

There’s a good point to be made here: an algorithm may be 100% accurate in diagnosing a condition, but the system can still fail for many reasons, one of which may be the all-too-human tendency to ignore medical advice. Of course, there are many good reasons why we may not be able to follow that advice, as the article mentions. However, the point is that, even if an algorithm gets it absolutely right, it may still not be the solution to the problem.

Note: I mentioned this story a few posts ago. It’s going to be interesting to follow it and see how the system fares in the uncertainty of real-world situations.

Categories
AI clinical

Dina Katabi: A new way to monitor vital signs (that can see through walls) | TED Talk

So if you think about it, wireless signals, they travel through space, they go through obstacles and walls and occlusions, and some of them, they reflect off our bodies, because our bodies are full of water, and some of these minute reflections, they come back. And if, just if, I had a device that can just sense these minute reflections, then I would be able to feel people that I cannot see. So I started working with my students on building such a device, and I want to show you some of our early results.

So here is our device, transmitting very low power wireless signal, analyzes the reflections using AI and spits out the sleep stages throughout the night. So we know, for example, when this person is dreaming. Not just that … we can even get your breathing while you are sitting like that, and without touching you. So he is sitting and reading and this is his inhales, exhales. We asked him to hold his breath, and you see the signal staying at a steady level because he exhaled. And I want to zoom in on the signal. These are the inhales, these are the exhales. And you see these blips on the signal? These are not noise. They are his heartbeats. And you can see them beat by beat.

Capturing a heartbeat while the subject holds their breath. Using WiFi.


Categories
AI clinical education

An introduction to artificial intelligence in clinical practice and education

Two weeks ago I presented some of my thoughts on the implications of AI and machine learning in clinical practice and health professions education at the 2018 SAAHE conference. Here are the slides I used (20 slides for 20 seconds each) with a very brief description of each. The presentation is based on a paper I submitted to OpenPhysio, called “Artificial intelligence in clinical practice: Implications for physiotherapy education”.


The graph shows how traffic to a variety of news websites changed after Facebook made a change to its Newsfeed algorithm, highlighting the influence that algorithms have on the information presented to us, and how little real choice we now have about what we read. When algorithms are responsible for filtering what we see, they have power over what we learn about the world.


The graph shows the near-flat line of social development and population growth until the invention of the steam engine. Before that, all of the Big Ideas we came up with had relatively little impact on our physical well-being. If your grandfather spent his life pushing a plough, there was an excellent chance that you’d spend your life pushing one too. But once we figured out how to augment our physical abilities with machines, we saw significant advances in society and industry and an associated improvement in everyone’s quality of life.


The emergence of artificial intelligence in the form of narrowly constrained machine learning algorithms has demonstrated the potential for important advances in cognitive augmentation. Basically, we are starting to really figure out how to use computers to enhance our intelligence. However, we must remember that we’ve been augmenting our cognitive ability for a long time, from exporting our memories onto external devices, to performing advanced computation beyond the capacity of our brains.


The enthusiasm with which modern AI is being embraced is not new. The research and engineering aspects of artificial intelligence have been around since the 1950s, while fictional AI has an even longer history. The field has been through a series of highs and lows (the lows are known as AI winters). The developments during these cycles were fueled by what has become known as Good Old-Fashioned AI: early attempts to explicitly design decision-making into algorithms by hard-coding all possible variations of the interactions in a closed environment. Understandably, these systems were brittle and unable to adapt to even small changes in context. This is one of the reasons that previous iterations of AI had little impact in the real world.


There are three main reasons why it’s different this time. The first is the emergence of cheap but powerful hardware (mainly central and graphics processing units), which has seen computational power growing by a factor of 10 every 4 years. The second is the exponential growth of data; massive data sets are an important reason that modern AI approaches have been so successful. The graph in the middle column shows data growth in zettabytes (10 to the power of 21 bytes). At this rate of growth we’ll run out of metric prefixes in a few years (yotta is the only prefix after zetta). The third is the emergence of vastly improved machine learning algorithms that are able to learn without being explicitly told what to learn. In the example here, the algorithm has coloured in the line drawings to create a pretty good photorealistic image, without being taught any of the concepts involved, i.e. human, face, colour, drawing, photo, etc.


We’re increasingly seeing evidence that in some very narrow domains of practice (e.g. reasoning and information recall), machine learning algorithms can outdiagnose experienced clinicians. It turns out that computers are really good at classifying patterns of variables that are present in very large datasets. And diagnosis is just a classification problem. For example, algorithms are very easily able to find sets of related signs and symptoms and put them into a box that we call “TB”. And increasingly, they are able to do this classification better than the best of us.
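
As a toy illustration of “diagnosis as classification” (not something from the poster or presentation), the sketch below trains a simple model on entirely synthetic sign-and-symptom data and asks it to estimate the probability of a made-up “TB” label for a new presentation.

```python
# Toy illustration only: synthetic sign/symptom data, invented feature names,
# and a made-up label standing in for a diagnosis such as "TB".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical binary features: [persistent_cough, night_sweats, weight_loss, fever]
X = rng.integers(0, 2, size=(500, 4))
# Toy rule standing in for the unknown real relationship: the label is more
# likely when several signs co-occur (plus a little noise).
y = ((X.sum(axis=1) + rng.normal(0, 0.5, size=500)) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
print("Estimated probability for a new presentation:", model.predict_proba([[1, 1, 1, 0]])[0, 1])
```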


It is estimated that up to 60% of a doctor’s time is spent capturing information in the medical record. Natural language processing algorithms are able to “listen” to the ambient conversation between a doctor and patient, record the audio and transcribe it (translating it in the process if necessary). The system then performs semantic analysis of the text (not just keyword matching) to extract meaningful information, which it can use to populate an electronic health record. While the technology is in a very early phase and not yet safe for real-world application, it’s important to remember that this is the worst it’s ever going to be. Even if we reach some kind of technological dead end with respect to machine learning and from now on only increase efficiency, we are still looking at a transformational technology.
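
As a very rough illustration of the general idea of turning freeform notes into structured data, here is a toy sketch using simple pattern matching on an invented note. Real systems rely on semantic analysis rather than hand-written patterns, and the note and field names here are my own.

```python
# Toy sketch (standard library only) of extracting structured fields from a
# freeform clinical note. Real systems rely on semantic analysis rather than
# hand-written patterns; the note and field names here are invented.
import re

note = "Pt reports lower back pain for 3 weeks, rated 6/10. BP 130/85. Taking ibuprofen 400mg."

patterns = {
    "duration_weeks": r"for (\d+) weeks?",
    "pain_score": r"rated (\d+)/10",
    "blood_pressure": r"BP (\d+/\d+)",
    "medication": r"[Tt]aking ([A-Za-z]+ \d+mg)",
}

record = {}
for field, pattern in patterns.items():
    match = re.search(pattern, note)
    record[field] = match.group(1) if match else None

print(record)
# {'duration_weeks': '3', 'pain_score': '6', 'blood_pressure': '130/85', 'medication': 'ibuprofen 400mg'}
```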


An algorithm recently passed the Chinese national medical exam, qualifying (in theory) as a physician. While we can argue that practising as a physician is more than writing a text-based exam, it’s hard not to acknowledge the fact that – at the very least – machines are becoming more capable in the domains of knowledge and reasoning that characterise much of clinical practice. Again, this is the worst that this technology is ever going to be.


This graph shows the number of AI applications under development in a variety of disciplines, including medicine (on the far right). The green segment shows the number of applications where AI is outperforming human beings. Orange segments show the number of applications that are performing relatively well, with blue highlighting areas that need work. There are two other points worth noting: medical AI is the area of research that is clearly showing the most significant advances (maybe because it’s the area where companies can make the most money); and all the way at the far left of the graph is education, suggesting that it may be some time before algorithms show the same progress in teaching.


Contrary to what we see in the mainstream media, AI is not a monolithic field of research; it consists of a wide variety of different technologies and philosophies that are each sometimes referred to under the more general heading of “AI”. While much of the current progress is driven by machine learning algorithms (which are themselves driven by the three characteristics of modern society highlighted earlier), there are many areas of development, each of which can potentially contribute to different areas of clinical practice. For the purposes of this presentation, we can define AI as any process that is able to independently achieve an objective within a narrowly constrained domain of interest (although the constraints are becoming looser by the day).


Machine learning is a sub-domain of AI research that works by exposing an algorithm to a massive data set and asking it to look for patterns. By comparing what it finds to human-tagged patterns in the data, developers can fine-tune the algorithm (i.e. “teach” it) before exposing it to untagged data and seeing how well it performs relative to the training set. This broadly describes the “learning” in machine learning. Deep learning is a sub-domain of machine learning that works by passing data through many layers, allocating different weights to the data at each layer, and thereby coming up with a statistical “answer” that expresses an outcome in terms of probability. Deep learning neural networks underlie many of the advances in modern AI research.
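
Here is a minimal sketch of that train-and-validate loop, using a small neural network on synthetic data. The data, labels and network size are invented; this illustrates the workflow, not any of the systems mentioned in the slides.

```python
# Minimal sketch of the train/validate workflow described above, with a small
# neural network that outputs class probabilities. Everything here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))              # 1,000 "cases", 10 numeric features
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # human-tagged labels (a toy rule)

# "Teach" the model on tagged data, then check it against data it hasn't seen.
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=1)
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=1)
net.fit(X_train, y_train)

print("Validation accuracy:", net.score(X_val, y_val))
print("Probability estimates for three unseen cases:")
print(net.predict_proba(X_val[:3]))
```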


Because machine and deep learning algorithms are trained on (biased) human-generated datasets, it’s easy to see how the algorithms themselves will have an inherent bias embedded in the outputs. The Twitter screenshot shows one of the least offensive tweets from Tay, an AI-enabled chatbot created by Microsoft, which learned from human interactions on Twitter. In the space of a few hours, Tay became a racist, sexist, homophobic monster – because this is what it learned from how we behave on Twitter. This is more of an indictment of human beings than it is of the algorithm. The other concern with neural networks is that, because of the complexity of the algorithms and the number of variables being processed, human beings are unable to comprehend how the output was computed. This has important implications when algorithms are helping with clinical decision-making and is the reason that resources are being allocated to the development of what is known as “explainable AI”.


As a result of the changes emerging from AI-based technologies in clinical practice, we will soon need to stop thinking of our roles in terms of “professions” and instead think in terms of “tasks”. This matters because, increasingly, many of the tasks we associate with our professional roles will be automated. This is not all bad news, though, because it seems probable that increased automation of the repetitive tasks in our repertoire will free us up to take on more meaningful work, for example, having more time to interact with patients. We need to start asking which tasks computers are better at and begin allocating those tasks to them. Of course, we will need to define what we mean by “better”: more efficient, more cost-effective, faster, etc.


Another important change that will require the use of AI-based technologies in clinical practice will be the inability of clinicians to manage – let alone understand – the vast amount of information being generated by, and from, patients. Not only are all institutional tests and scans digital but increasingly, patients are creating their own data via wearables – and soon, ingestibles – all of which will require that clinicians are able to collect, filter, analyse and interpret these vast streams of information. There is evidence that, without the help of AI-based systems, clinicians simply will not have the cognitive capacity to understand their patients’ data.


The impact of more patient-generated health data is that we will see patients being in control of their data, which will exist on a variety of platforms (cloud storage, personal devices, etc.), none of which will be available to the clinician by default. This means that power will move to the patient as they make choices about who to allow access to their data in order to help them understand it. Clinicians will need to come to terms with the fact that they will no longer wield the power in the relationship and in fact, may need to work within newly constituted care teams that include data scientists, software engineers, UI designers and smart machines. And all of these interactions will be managed by the patient who will likely be making choices with inputs from algorithms.


The incentives for enthusiastic claims around developments in AI-based clinical systems are significant; this is an academic land grab the likes of which we have only rarely experienced. The scale of the funding involved puts pressure on researchers to exaggerate claims in order to be first to every important milestone. This means that clinicians will need to become conversant with the research methods and philosophies of the data scientists who are publishing the most cutting-edge research in the medical field. The time will soon come when it will be difficult to understand the language of healthcare without first understanding the language of computer science.


The implications for health professions educators are profound, as we will need to start asking ourselves what we are preparing our graduates for. When clinical practice is enacted in an intelligent environment and clinicians are only one of many nodes in vast information networks, what knowledge and skills do they need to thrive? When machines outperform human beings in knowledge and reasoning tasks, what is the value of teaching students about disease progression, for example? We may find ourselves graduating clinicians who are well-trained, competent and irrelevant. It is not unreasonable to think that the profession called “doctor” will not exist in 25 years’ time, having been superseded by a collective of algorithms and third-party service providers who provide more fine-grained services at a lower cost.


There are three new literacies that health professions educators will need to begin integrating into our undergraduate curricula. Data literacy, so that healthcare graduates will understand how to manage, filter, analyse and interpret massive sets of information in real-time; Technological literacy, as more and more of healthcare is enacted in digital spaces and mediated by digital devices and systems; and Human literacy, so that we can become better at developing the skillsets necessary to interact more meaningfully with patients.


There is evidence to suggest that, while AI-based systems outperform human beings on many of the knowledge and reasoning tasks that make up clinical practice, the combination of AI and human originality results in the most improved outcomes of all. In other words, we may find that patient outcomes are best when we figure out how to combine human creativity and emotional response with machine-based computation.


And just when we’re thinking that “creativity” and “originality” are the sole province of human beings, we’re reminded that AI-based systems are making progress in those areas as well. It may be that the only way to remain relevant in a constantly changing world is to develop our ability to keep learning.


Categories
ethics physiotherapy

Ethics CPD lecture

As part of our commitment to continuing professional development (CPD) in South African healthcare, we’re required to accumulate 5 ethics credits every year. Yesterday I gave a presentation to the staff in our department in order to fulfil this requirement. It went quite well, although, it being my first time, I felt pretty unprepared.

I learnt a lot from the experience and, together with the feedback I got from my colleagues, will be able to refine the workshop for next year. One of the main suggestions was to add more interactivity to the session. I definitely agree that this is one area I could’ve improved on, especially with a view to making it more dynamic.

Categories
conference education health research

SAAHE keynote – Improving health professions education to improve health (Bill Burdick)

I’m going to split my blog posts up according to the different sessions, just for ease of reference i.e. a few posts, rather than one very long one. Here are my notes from the first keynote of the day, from Professor Bill Burdick.

If you don’t continue the momentum for change, you’re going to be left behind

We need to start system capacity building at the undergraduate level

Presentation made good use of Gapminder (started by Hans Rosling to track human development trends)

It turns out that GDP isn’t the most important factor in determining life expectancy, nor is the number of doctors per 1000 population, nor are sanitation and literacy, although there is an increasing trend for each of these variables. Health spending as a % of GDP also isn’t the major factor. Changing any one of these independent variables won’t necessarily enhance life expectancy, but changing all of them will.

Fewer children per woman = greater life expectancy; also, the younger a woman is at marriage, the earlier she dies

Taking these factors into account, what must we as health educators do to have an impact on improving health?

Academics have the skills to pull in, analyse and interpret data, and to disseminate the resultant new knowledge, which clinicians need in order to make evidence-based decisions that enhance clinical care.

It is important for academics / health educators to integrate with the public sector by engaging with the community, training other health workers, incorporating health professionals in the management sector, and engaging with public policy-makers

Ruth Levine – Case studies in global health: millions saved (freely available report):

  • Health interventions have worked even in poor countries
  • Donor funding saves lives
  • Saving lives saves money
  • Partnership is powerful
  • National governments can get the job done
  • Health behaviours can be changed
  • Successful programmes can take many forms

Health education by itself cannot improve health

Is our curriculum aligned with any of the following factors?

  • Water
  • Sanitation
  • Fertility
  • Literacy
  • Social integration
  • Access to healthcare
  • Nutrition

Discussion of the above can easily be integrated into any case study but faculty may need support during the change

Start system capacity building with undergraduates

  • Teach leadership and management skills → students can be better at facilitating community change with these skills
  • Add interdisciplinary education to improve subsequent team work
  • Integrate rural practitioners into the faculty role
  • Create systems for knowledge sharing (academia ↔ community)

Positive deviance inquiry – technique to introduce behavioural change in communities

Lessons to learn from the Brazilian health education system

  • Curriculum guidelines should emphasise local needs
  • Government and medical school leaders attend educational meetings together (integration of ministry of health and ministry of education)

If any of this is to make an impact on health outcomes, institutions must have institutional goals that reflect a desire to improve health → then faculty promotion can be linked to those goals

Categories
assignments education physiotherapy students

Giving students a voice in Physiotherapy Ethics

I’ve been going through some of the “Professional Ethics” assignments I received from our third-year physiotherapy students, and wanted to share this one with you (with the student’s permission). It was written by Basil Buthelezi, and I think it really showcases the wonderful talents our students have, which we would never usually encounter because we focus so much energy on the clinical component of physiotherapy education.

The assignment was to explore the theme of human rights in South African healthcare, using any medium the students wanted. So far, I’ve received a fictional newspaper front page (which I’m hoping to put up here as well), been directed to this blog, and now this poem. I wanted to share it because I think it illustrates the potential students have to amaze us when we give them the opportunity to speak with their own voices. Here’s the poem by Basil Buthelezi…

Site of entertainment (voices personalising HIV / AIDS)

I’m all over,
From the person next to you,
In the neighbourhood and,
All four corners of the world.

They all bow for me,
From TB to Cancer,
From strokes to the paralysed,
Beautiful or ugly,
From infants to the elderly,
Rich or poor,
White or black, “colour with no discrimination”,
But all the negativities in me.

Fair enough,
I’m tired of tears and the angry faces of stranded orphans,
Hopeless,
Harmless,
Hungry,
Homeless,
Their tears have given birth to an ocean.
Yes, my throat is dry, but I can’t drink in this ocean because it’s dirty,
All infected, the attack of vampires is in full swing,
Kill them, kill them all!!
Seize the duplication.

Dollars and dollars,
I have explored their pockets and robbed their monies,
Monies buying antiretrovirals
To keep me low, yet
The dead sentence is coming.

Graves and graves,
If they were coloured red
This world will be red, red
Red for danger
Red bloody red.

The equation is shifting,
Outplaying the moments of pleasure,
Abstain to restore the equilibrium
“Be faithful” is a song of goodwill.

If not!
Pause, before you explore the site of entertainment,
Have you worn a jacket to protect you,
To protect you from hot and juicy stuff?
I know you want to be happy down there…,
But you need a license to enjoy,
Cause I’m like a vampire waiting to attack
And destroy the essence of your life.

Basil Buthelezi (2009)