The researchers tested it against dozens of bacterial strains isolated from patients and grown in lab dishes, and found that it was able to kill many that are resistant to treatment, including Clostridium difficile, Acinetobacter baumannii, and Mycobacterium tuberculosis.
Something that stood out for me in this short article is the scale at which machine learning models work. The algorithm in this study explored more than a hundred million compounds in a few days, something that would be prohibitively expensive, if not impossible, with more traditional methods.
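To get a feel for why this kind of scale is routine for software, here's a minimal sketch of a virtual-screening loop: score every candidate with a trained model and keep the top-scoring hits. The scoring function below is a deterministic stand-in, not the actual model from the study, and the compound names are made up.

```python
import heapq
import random

def predicted_activity(compound: str) -> float:
    """Stand-in for a trained model's antibacterial-activity score.
    Deterministic pseudo-random number per compound, purely illustrative."""
    return random.Random(compound).random()

def screen(compounds, top_k=5, threshold=0.9):
    """Score every candidate and keep the highest-scoring hits."""
    scored = ((predicted_activity(c), c) for c in compounds)
    hits = [(s, c) for s, c in scored if s >= threshold]
    return heapq.nlargest(top_k, hits)

# Screening a large library is just a loop; the hard part is the model.
library = [f"compound-{i}" for i in range(100_000)]
hits = screen(library)
print(len(hits))
```

The same loop structure scales to hundreds of millions of compounds with batching and parallelism; the bottleneck is the cost of each model evaluation, not the search itself.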
What’s also promising is that this antibiotic seems to work in a way that makes it difficult for bacteria to develop resistance, another example of how AI is a very different kind of intelligence from human intelligence. Not only can it explore much larger problem spaces than we’re capable of, it may also develop solutions to the problems we care about in very different ways.
You may not think of this as creativity but, regardless of what you call it, you have to acknowledge that it’s effective.
…we installed cheap depth sensors that can collect human behavior data on patients and clinicians without infringing on their privacy, because these are not photo grabs of people’s faces and identities. With that information, we can observe longitudinally, 24/7, whether proper care is being given to our patients and provide feedback in the health delivery system.
I didn’t hear what the number one wish was (I was driving to work and may have been distracted for a moment) but the conversation is generally worth listening to. Topol and Li both have good insight into the application of AI in clinical contexts and the conversation touches on some of the technical aspects of AI (e.g. bias, training machine learning algorithms, labeled datasets, etc.) while staying accessible for listeners who are unfamiliar with the details.
One of the standout bits for me was the discussion around how the use of depth sensors in an ICU can generate data that an AI can use to map the behaviour of staff within the unit, to the extent that it can tell whether or not basic levels of care are being met. You might have concerns about issues of privacy and the surveillance of staff but, if one of my family members were in an ICU, I’d certainly want to know whether everyone is washing their hands appropriately.
The link above includes a transcript of the conversation.
The results were comparable to or exceeded clinical standards, with an overall success rate of 87% for the 31 participants whose blood was drawn. For the 25 people whose veins were easy to access, the success rate was 97%. The device includes an ultrasound image-guided robot that draws blood from veins. A fully integrated device, which includes a module that handles samples and a centrifuge-based blood analyzer, could be used at bedsides and in ambulances, emergency rooms, clinics, doctors’ offices and hospitals.
This is another example of the kinds of tasks that will increasingly be performed by machines. You can argue that certain patient populations (e.g. young children, patients with mental health issues, etc.) will always need a human being performing the technique for safety reasons. And this is likely to be true for a long time. But those situations account for only a minority of the venipunctures performed; the bulk of this work will soon be done by robots that are cheaper, faster and cause less damage than human clinical staff.
Nurses are unlikely to be replaced any time soon because their work includes so much more than drawing blood. But the tasks we expect them to perform are certainly going to change. How are health professions educators in the undergraduate curriculum working to get ahead of those changes?
…the algorithm doesn’t use social media postings because that data is too messy. But he does have one trick up his sleeve: access to global airline ticketing data that can help predict where and when infected residents are headed next. It correctly predicted that the virus would jump from Wuhan to Bangkok, Seoul, Taipei, and Tokyo in the days following its initial appearance.
It’s important to remember that clinical AI isn’t only going to influence how individuals interact with each other. AI-based systems that aggregate and interpret massive volumes of information moving across multiple networks are going to help us respond to medical emergencies at national and international levels. These systems won’t rely on official sources of information, like governments or peer-reviewed publications, or even unofficial sources like Twitter, but rather on the collective behaviour of thousands of people who are just going about their day.
In the same way that simply having a phone in your car while driving means that Google can make predictions about traffic throughout the day, systems that track our behaviour over time will help healthcare professionals make sense of important, large-scale events that are impossible for human beings to predict.
The problem of overdiagnosis is often mentioned in relation to two common cancers: breast and prostate. In both cases, enhanced technology is already detecting small abnormalities that may never result in harm during a lifetime. Machine-learning may trump human interpretation but merely making a diagnosis does not bring us closer to the truth about the impact of the finding. In other words, will the cancer ever cause symptoms, and crucially, will the patient die from it? How will the knowledge of cancer alter the rest of a person’s days?
I’m not a fan of the way the author starts the article; it feels a bit contrived and unlikely to reflect the patient experience of healthcare around the world. But I think that the point the author is making is that there are certain aspects of healthcare that AI and robots aren’t going to replace (she could probably have just said that?).
So yes, AI is already “better” than human beings in several different areas (e.g. diagnostics, interpretation of findings, image recognition, etc.). But no, that doesn’t mean that healthcare professionals will be replaced. Because being a doctor/physio/nurse means that we are more than interpreters of results; we are human beings in communion with other human beings. While the features of AI in clinical practice don’t mean that we’re going to see the replacement of professions, they do mean that we might see the replacement of tasks within professions.
Unfortunately, the article doesn’t get to this point and simply concludes that, because all the tasks of a doctor can’t be replaced, the question is moot. But it’s the wrong question to ask. We’re not going to replace health care providers with smart humanoid robots but we’ll definitely see changes in professional training and in clinical practice.
The implications of this are that, in order to remain relevant, professions in the near future will need to demonstrate an ability to take advantage of the benefits of advanced technologies while adapting and expanding the relationship-centred aspects of health care.
I’ve started working on what will eventually become a curated library of resources that I’m using for my research on the impact of artificial intelligence and machine learning on clinical practice. At the moment it’s just a public repository of the articles, podcasts, and blog posts that I’ve read or listened to and then saved in Zotero. You can subscribe to the feed so that when new items are added you’ll get a notification in whatever feed reader you use. Click on the image below to see the library.
For now, it’s a public – but closed – group that has a library, meaning that anyone can see the list of library items but no-one can join the group, which means no-one else can add, edit or delete resources (for now). This is just because I’m still figuring out how it works and don’t want the additional admin of actually managing anything. I may open this up in future if it looks like anyone else is interested in joining and contributing. I’m also not sharing any of the original articles and books but will look into the implications of sharing these publicly, considering that most of them – being academic articles – are subject to copyright restrictions from the publishers.
The library/repository isn’t meant to be exhaustive but rather a small selection of articles and other resources that I think might be useful for clinicians, educators, students and researchers with an interest in AI in healthcare. At the moment it’s just a dump of some of the resources I’ve used, along with the notes and links associated with them. I’m going to revisit the items in the list and try to add more useful summaries and descriptions, with the idea that this could become something like a curated, annotated reading/watching/listening list for anyone with an interest in the topic.
You could argue that because these pictures are designed to fool AI, it’s not exactly a fair fight. But it’s surely better to understand the weaknesses of these systems before we put our trust in them.
This is an important issue to be aware of: the published studies showing that AI is vastly superior to human perception may be true only in very narrow, tightly controlled situations. If we’re not aware of that, we may be willing to place too much trust in systems that are fundamentally biased or inaccurate when it comes to performance in the real world.
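The fragility of these systems can be shown with a toy example. For a linear classifier, an attacker who knows the weights can nudge every feature by a tiny amount in the direction that most changes the score, flipping the decision while the input looks essentially unchanged. The weights and input below are made up for illustration; real adversarial attacks apply the same idea (a gradient-sign step) to deep networks.

```python
# Toy adversarial perturbation against a linear classifier
# score(x) = w . x. A step of size eps in the direction sign(w)
# shifts the score by eps * sum(|w_i|), which can flip the decision
# even though each individual feature barely changes.
w = [0.5, -1.2, 0.8, 2.0]      # hypothetical learned weights
x = [0.1, 0.3, -0.2, 0.05]     # an input the model scores as negative

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def perturb(w, x, eps):
    """FGSM-style step: move each feature by eps in the direction
    that increases the score."""
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

eps = 0.2
x_adv = perturb(w, x, eps)

print(score(w, x))      # negative: original classification
print(score(w, x_adv))  # shifted by eps * sum(|w|); now positive
```

The unsettling part is that the size of the shift grows with the number of features, so high-dimensional inputs like images are especially easy to fool with imperceptible changes.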
For example, consider decision-making in expert systems (something like IBM’s Watson) where the system is trained on retrospective data, usually from places that have a lot of data. This might translate into the system making suggestions for patient management based on what has been done in the past, in circumstances that are completely different to the current context. If I’m a family practitioner practising in rural South Africa, it may not be that useful to know what an expert oncologist in Boston would have done in a similar situation.
It’s unlikely that the management options provided by the system are feasible for implementation because of differences in people, culture, language, society, health systems, etc. But unless I know that the data my expert system was trained on is contextually flawed, I may simply go ahead and then have no idea why it fails. It’s important to test AI systems in situations where we know they’ll break before we roll them out in the real world.
AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they’re optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don’t have any concept of what this content actually is, and they don’t have any concept of what the consequences might be of recommending this content.
We don’t need to worry about AI that is conscious (yet), only that it is competent and that we’ve given it a poorly considered problem to solve. When we think about the solution space for AI-based systems we need to be aware that the “correct” solution for the algorithm is one that literally solves the problem, regardless of the method.
This matters in almost every context we care about. Consider the following scenario. ICUs are very expensive for a lot of good reasons; they have a very specialised workforce, a very low staff-to-patient ratio, the time spent with each patient is very high, and the medication is crazy expensive. We might reasonably ask an AI to reduce the cost of running an ICU, thinking that it could help to develop more efficient workflows, for example. But the algorithm might come to the conclusion that the most cost-effective solution is to kill all the patients. According to the problem we posed, this isn’t incorrect, but it’s clearly not what we were looking for, and any human being on earth, including small children, would understand why.
Before we can ask AI-based systems to help solve problems we care about, we’ll need to first develop a language for communicating with them, one that includes the common-sense parameters that inherently bound all human-human conversation. When I ask a taxi driver to take me to the airport “as quickly as possible”, I don’t also need to specify that we shouldn’t break any rules of driving, and that I’d like to arrive alive. We both understand the boundaries that define the limits of my request. As the video above shows, an AI doesn’t have any “common sense”, and this is a major obstacle to progress towards AI that can address real-world problems beyond the narrow contexts where it is currently competent.
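The ICU example can be reduced to a few lines of code. Ask an optimiser to minimise cost with no constraints and the “best” plan is to treat nobody; the fix is to state explicitly the requirements a human would never think to mention. All the costs and ratios below are made-up numbers for illustration.

```python
# Toy illustration of objective misspecification: minimise ICU cost
# over (patients, nurses). Unconstrained, the optimum is an empty ICU.
def daily_cost(patients, nurses):
    return patients * 800 + nurses * 400   # hypothetical per-day costs

def optimise(constraints=()):
    plans = [(p, n) for p in range(21) for n in range(21)]
    feasible = [pl for pl in plans if all(c(*pl) for c in constraints)]
    return min(feasible, key=lambda pl: daily_cost(*pl))

# Unconstrained: the cheapest ICU has zero patients and zero nurses.
print(optimise())  # (0, 0)

# With the "obvious" common-sense requirements spelled out:
must_treat = lambda p, n: p >= 12      # the ICU's current patient load
safe_ratio = lambda p, n: n * 2 >= p   # at most 2 patients per nurse
print(optimise([must_treat, safe_ratio]))  # (12, 6)
```

Every constraint we leave unstated is a degree of freedom the optimiser is free to exploit, which is why “literally solves the problem” and “solves the problem we meant” can be so far apart.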
This study gives examples for implementing technology-facilitated approaches and provides the following recommendations for conducting such longitudinal, sensor-based research, with both environmental and wearable sensors in a health care setting: pilot test sensors and software early and often; build trust with key stakeholders and with potential participants who may be wary of sensor-based data collection and concerned about privacy; generate excitement for novel, new technology during recruitment; monitor incoming sensor data to troubleshoot sensor issues; and consider the logistical constraints of sensor-based research.
We’re going to be seeing more and more of this type of research in healthcare organisations, which I think is a good thing, given the following caveats (I’m sure that there are many more):
We still need to be critical about how sensors record data, what kind of data they record, and what kinds of questions are prioritised with this type of research.
Knowing more about how bodies work at the physiological level says nothing about the social, political, ethical, and other factors that are responsible for the bigger health issues of our time, e.g. chronic diseases of lifestyle.
Behaviour can be tracked but the underlying beliefs that drive behaviour are still opaque. We need to be careful not to confuse behaviour with reasons for that behaviour.
The reason I think that sensor-based research is, in general, a good thing is that the questions you’re likely to ask in these kinds of studies are the same questions we currently use observation and participant self-report to answer. We know that these forms of data collection are inherently unreliable, so it’s interesting to see people trying to address this.
However, even assuming that sensor-based studies are more reliable (and we would first need to ask: reliable against what outcomes?), having more reliable data says little about whether the questions and corresponding data are valid. In other words, we need to be careful that the data being collected are appropriate for answering the types of questions we’re asking.
Finally, it stands to reason that once we have the data on the behaviour (the easy part) we still need to do the hard research that gets at the underlying reasons for why people behave in the way that they do. Simply knowing that people tend to do X is only the first step. Understanding why they do X and not Y is another step (possibly determined by interviews or focus group discussions), and then presumably trying to get them to change their behaviour may be the hardest part of all.
The new machine learning-powered service, Amazon Transcribe Medical, will allow physicians to quickly dictate their clinical notes and speech into accurate text in real time, without any human intervention, Amazon claims.
I use voice recognition on my phone fairly often and am always impressed by the quality of the notes it makes, and by how quickly it improves. But when it comes to unusual words, like the ones we use in healthcare, natural language processing (NLP) on the phone is left wanting.
This demo from the AWS re:Invent conference goes some way towards showing how much easier things are going to get. Once the text is accurately recognised, a semantic system will be able to “make sense” of it and enter it into an EHR before making suggestions for appropriate follow-up appointments, discharge notes, medical prescriptions, etc. We live in interesting times.
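For anyone curious what this looks like from the developer's side, here's a minimal sketch of starting a batch dictation job with Amazon Transcribe Medical via boto3. The `start_medical_transcription_job` operation is part of the AWS Transcribe API, but the bucket names, file names and job name below are all hypothetical, and the actual call (commented out) requires AWS credentials.

```python
# Build the parameters for an Amazon Transcribe Medical batch job.
def build_job_params(job_name, audio_uri, output_bucket):
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",              # US English at launch
        "Media": {"MediaFileUri": audio_uri},
        "OutputBucketName": output_bucket,
        "Specialty": "PRIMARYCARE",
        "Type": "DICTATION",                  # clinician dictating notes
    }

params = build_job_params(
    "clinic-note-001",                        # hypothetical job name
    "s3://my-audio-bucket/note-001.wav",      # hypothetical bucket/key
    "my-transcripts-bucket",                  # hypothetical output bucket
)

# With credentials configured, the job would be started like this:
# import boto3
# boto3.client("transcribe").start_medical_transcription_job(**params)
print(params["MedicalTranscriptionJobName"])
```

The completed transcript lands in the output bucket as JSON, which is the kind of structured text a downstream semantic system could then map into an EHR.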