…we installed cheap depth sensors that can collect human behavior data on patients and clinicians without infringing on their privacy, because these are not photo grabs of people’s faces and identities. With that information, we can observe longitudinally, 24/7, whether proper care is being given to our patients and provide feedback in the health delivery system.

Topol, E. & Li, F. (2020). Clinicians’ ‘Number-One Wish’ for Artificial Intelligence. Medicine and the Machine podcast. Medscape.
I didn’t hear what the number-one wish was (I was driving to work and may have been distracted for a moment), but the conversation is generally worth listening to. Topol and Li both have good insight into the application of AI in clinical contexts, and the conversation touches on some of the technical aspects of AI (e.g. bias, training machine learning algorithms, labeled datasets) while staying accessible for listeners who are unfamiliar with the details.
One of the standout bits for me was the discussion of how depth sensors in an ICU can generate data that an AI can use to map the behaviour of staff within the unit, to the extent that it can tell whether or not basic levels of care are being met. You might have concerns about privacy and the surveillance of staff, but if one of my family members were in an ICU, I know that I’d want to know whether everyone is washing their hands appropriately.
The link above includes a transcript of the conversation.