Michael Rowe

Trying to get better at getting better

Khetpal, V., & Shah, N. (2021, May 28). How a largely untested AI algorithm crept into hundreds of hospitals. Fast Company.

The use of algorithms to support clinical decision-making isn’t new. But historically, these tools have been put into use only after a rigorous peer review of the raw data and statistical analyses used to develop them. Epic’s Deterioration Index, on the other hand, remains proprietary despite its widespread deployment. Although physicians are provided with a list of the variables used to calculate the index and a rough estimate of each variable’s impact on the score, we aren’t allowed under the hood to evaluate the raw data and calculations. Furthermore, the Deterioration Index was not independently validated or peer-reviewed before the tool was rapidly deployed to America’s largest healthcare systems. Even now, there have been, to our knowledge, only two peer-reviewed published studies of the index. The deployment of a largely untested proprietary algorithm into clinical practice—with minimal understanding of the potential unintended consequences for patients or clinicians—raises a host of issues.

This seems like a really bad idea. We’re used to having clinical interventions spend years going through rigorous testing, peer review, clinical trials, etc. The idea that it’s possible to deploy clinical decision support tools in a real-world setting, without anyone being able to check what they’re doing, is pretty scary. Is this a regulatory problem? A personal morality issue? I don’t know of a good solution.


Editorial (2021). The Guardian view on medical records: NHS data grab needs explaining. The Guardian.

The records being stored contain the most private details of a person’s life. The proposals suggest mass collection of every English patient’s history, including mental health episodes, their smoking and drinking habits, and diagnoses of diseases such as cancer. But it will also include dated instances of domestic violence, abortions, sexual histories and criminal offences. Given the proposed scope of such a database, it is reasonable to ask who will be given this data, and for what purpose.

Continuing with the theme of things-that-are-bad, we can’t have scenarios where institutions make unilateral decisions about capturing data at this scale. And the fact that it’s opt-out makes it even more troubling. I’m a fan of data and I think that healthcare institutions absolutely need to have access to enormous datasets if they’re going to be able to use health AI effectively. But it cannot be at the cost of patient autonomy with respect to their medical records. This doesn’t seem like a good way to build trust in the institutions that are responsible for public health.


Coiera, E. (2020). The cognitive health system. The Lancet, 395(10222), 463–466.

Artificial intelligence tools that directly influence human decisions and that will increasingly have autonomy to make decisions are now being added to this distributed cognitive system. Such a distributed network of humans and artificial intelligence is called a cybersocial system, with actions and outcomes emerging from the interacting decisions of both humans and machines. This cognitive health system might not be able to think the way humans do, but it will indirectly make decisions we never asked of it.

This article explores some of the implications of a health system sufficiently integrated with AI that the system itself would be capable of a kind of reasoning. The health system is already a sociotechnical system (sociotechnical systems describe the complex interplay between humans and technology in determining the outcomes of interactions), and I’m not entirely sure how the conception of a cybersocial system (which this article introduced me to for the first time) differs from it. Regardless, the idea of a cognitive health system is interesting.


Rajkomar, A., & Oren, E. (2018, May 8). Deep Learning for Electronic Health Records. Google AI Blog.

When patients get admitted to a hospital, they have many questions about what will happen next. When will I be able to go home? Will I get better? Will I have to come back to the hospital? Having precise answers to those questions helps doctors and nurses make care better, safer, and faster — if a patient’s health is deteriorating, doctors could be sent proactively to act before things get worse.

A few things stood out for me in this short blog post (which links to the peer-reviewed paper): the problems of scalability and accuracy in traditional electronic health records and how this issue was approached; Fast Healthcare Interoperability Resources (FHIR) – a set of standards for health data exchange; and how this model could highlight for clinicians the variables that had the biggest influence on its prediction (i.e. it’s a step towards explainable AI).
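To make the FHIR mention above a little more concrete: FHIR represents clinical data as typed JSON “resources” that any compliant system can exchange. The sketch below is a minimal, hypothetical Observation resource recording a heart rate – the patient reference and values are invented for illustration, not taken from the paper.

```python
import json

# A minimal, hypothetical FHIR Observation resource (heart rate).
# The subject reference and values here are made up for illustration.
observation = {
    "resourceType": "Observation",   # every FHIR resource declares its type
    "status": "final",
    "code": {
        "coding": [
            {
                "system": "http://loinc.org",
                "code": "8867-4",        # LOINC code for heart rate
                "display": "Heart rate",
            }
        ]
    },
    "subject": {"reference": "Patient/example"},  # link to a Patient resource
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",               # UCUM unit code
    },
}

# Resources serialise to plain JSON, which is what makes
# exchange between otherwise incompatible health systems possible.
print(json.dumps(observation, indent=2))
```

The appeal of this kind of standardisation for the Google work is that a model trained on FHIR-formatted records isn’t tied to one hospital’s bespoke database schema.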

