Comment: Separating the Art of Medicine from Artificial Intelligence

…the only really useful value of artificial intelligence in chest radiography is, at best, to provide triage support — tell us what is normal and what is not, and highlight where it could possibly be abnormal. Just don’t try and claim that AI can definitively tell us what the abnormality is, because it can’t do so any more accurately than we can because the data is dirty because we made it thus.

This is a generally good article on the challenges of using poorly annotated medical data to train machine learning algorithms. However, there are three relevant points that the author doesn't address at all:

  1. He assumes that algorithms will only be trained on chest images that have been annotated by human beings. They won't be. In fact, I can't see why anyone would do this anyway, for exactly the reasons he states. What is more likely is that AI will look across a wide range of clinical data points and use them, in association with the CXR, to determine a diagnosis. So, if the (actual) diagnosis is a cardiac issue, you'd expect the image to correlate with cardiac markers and assign less weight to infection markers. Likewise, if the diagnosis was pneumonia, you'd see changes in infection markers but wouldn't have much weight assigned to cardiac information. In other words, the analysis of CXRs won't be informed by human-annotated reports; it'll happen through correlation with all the other clinical information gathered from the patient (see the sketch after this list).
  2. He starts out by presenting a really detailed argument explaining the incredibly low inter-rater reliability, inaccuracy, and weak validity of human judges (in this case, radiologists) when it comes to analysing chest X-rays, but then ends by saying that we should leave the interpretation to them anyway, rather than to algorithms.
  3. He is a radiologist, which should at least give one pause, given that his final recommendation is to leave things to the radiologists.
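
To make point 1 concrete, here's a minimal sketch of training against outcome labels drawn from the clinical record rather than from radiologist annotations, so the model itself learns how to weight image features against cardiac and infection markers. Everything here is synthetic, and the feature names (troponin, CRP) are just illustrative stand-ins:

```python
# Sketch of point 1: train against outcomes from the clinical record
# (e.g. discharge diagnosis), not radiologist annotations, and let the
# model weight CXR features against other markers. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical inputs: a learned CXR embedding plus lab markers.
cxr_embedding = rng.normal(size=(n, 8))  # stand-in for image features
troponin = rng.normal(size=(n, 1))       # cardiac marker
crp = rng.normal(size=(n, 1))            # infection marker (C-reactive protein)

# Synthetic "ground truth": a cardiac diagnosis driven mostly by the
# cardiac marker, partly by the image, and not at all by the infection marker.
is_cardiac = (troponin[:, 0] + 0.5 * cxr_embedding[:, 0]
              + rng.normal(scale=0.5, size=n)) > 0

X = np.hstack([cxr_embedding, troponin, crp])
model = LogisticRegression().fit(X, is_cardiac)

# The fitted coefficients show how much weight each signal carries:
# here troponin should dominate, and CRP should carry almost none.
names = [f"img_{i}" for i in range(8)] + ["troponin", "crp"]
print(dict(zip(names, model.coef_[0].round(2))))
```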

These points aside, the author makes an excellent case for why we need to make sure that medical data are clean and annotated with machine-readable tags. Well worth a read.

Facebook and NYU Using AI to Speed Up MRIs

The Facebook/NYU partnership is working to minimize the amount of data that is captured, instead relying on computers to reconstruct the image from imperfect inputs. If this is successful, we may see a 10x reduction in scan times, which would lead to lower costs for MRIs and a much greater utilization of these machines.

Source: Facebook and NYU Using AI to Speed Up MRIs.

Machine learning is successful partly because of its ability to make accurate inferences about missing data. In other words, machine learning algorithms predict outcomes based on patterns observed in the data, even when important information is missing. The article describes a project in which ML algorithms fill in the data that goes missing when an MRI scan is done too quickly. Currently, scans take a long time because the image can't be read properly when data is missing. But if an algorithm can infer that missing data with a high level of accuracy, then we can afford to do the scans more quickly and fill in the gaps algorithmically.
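
As a toy illustration of what "missing data" means here: an MRI is acquired in frequency space (k-space), and skipping acquisition lines is what makes the scan faster. The sketch below, using only synthetic data, shows how a naive zero-filled reconstruction of an undersampled scan degrades; a trained model's job is to infer the skipped lines instead:

```python
# Toy illustration of the undersampling problem: skipping k-space lines
# speeds up the scan, but a naive reconstruction turns the gaps into
# ghosting artifacts. Synthetic data only.
import numpy as np

# A simple synthetic "anatomy": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

kspace = np.fft.fft2(image)  # a full acquisition in frequency space

# Simulate a 4x-accelerated scan by keeping only every 4th k-space row.
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True
undersampled = np.where(mask, kspace, 0)

# Naive zero-filled reconstruction: the missing rows alias into ghost
# copies of the anatomy instead of disappearing gracefully.
naive = np.abs(np.fft.ifft2(undersampled))

print(f"Mean reconstruction error with 1/4 of the data: "
      f"{np.abs(naive - image).mean():.3f}")
# A trained network would replace the zeros with inferred values to drive
# this error down; that inference step is where a subtle finding could,
# in principle, be smoothed away.
```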

I wonder if there’s a risk of missing something that would have turned up with current scans: for example, a tumour that doesn’t show up in the AI-moderated scan because the algorithm didn’t infer its presence.

MIT Creates AI to Optimize Brain Cancer Treatment

The goal [with chemotherapy] is basically to poison the tumor cells faster than non-cancerous cells, but the side effects of going after an aggressive disease like this can be devastating. These traditional treatment schedules don’t take into account differences in tumor size, medical histories, genetic profiles, and biomarkers. The system developed by MIT does that, resulting in lowered dosages and sometimes even skipping doses altogether.

Source: Whitwam, R. (2018). MIT Creates AI to Optimize Brain Cancer Treatment.

Machine learning algorithms that modify the dosage and frequency of cancer medication can significantly reduce the overall toxicity of the treatment. These results are only in simulations, though, so we'll have to wait and see what happens when the new regimens are implemented in RCTs with real patient populations. Sounds promising, though.
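
For a flavour of how such a system might work under the hood, here's a minimal Q-learning sketch in which an agent chooses a dose (including skipping one) each cycle and is rewarded for tumour shrinkage minus a toxicity penalty. The patient dynamics, dose levels, and reward weights are invented for illustration; they are not the paper's actual model:

```python
# Minimal Q-learning sketch of dose scheduling: the agent is rewarded for
# shrinking the tumour and penalized for toxicity. All dynamics and
# constants here are invented for illustration.
import random

random.seed(0)

DOSES = [0.0, 0.5, 1.0]   # skip, half dose, full dose
CYCLES = 10               # treatment cycles per episode
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(tumor, dose):
    """Toy dynamics: the tumour regrows slightly but shrinks with dose."""
    new_tumor = max(0.0, tumor * (1.05 - 0.15 * dose))
    reward = (tumor - new_tumor) - 0.05 * dose  # shrinkage minus toxicity
    return new_tumor, reward

def bucket(tumor):
    """Coarse tumour-size bins so the Q-table stays small."""
    return min(int(tumor * 10), 10)

Q = {}  # (state, action) -> value

for episode in range(5000):
    tumor = 1.0
    for t in range(CYCLES):
        s = (t, bucket(tumor))
        if random.random() < EPSILON:
            a = random.randrange(len(DOSES))  # explore
        else:
            a = max(range(len(DOSES)), key=lambda i: Q.get((s, i), 0.0))
        tumor, r = step(tumor, DOSES[a])
        s2 = (t + 1, bucket(tumor))
        best_next = max(Q.get((s2, i), 0.0) for i in range(len(DOSES)))
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

# Greedy policy after training: note where the agent lowers or skips doses.
tumor = 1.0
for t in range(CYCLES):
    s = (t, bucket(tumor))
    a = max(range(len(DOSES)), key=lambda i: Q.get((s, i), 0.0))
    tumor, _ = step(tumor, DOSES[a])
    print(f"cycle {t}: dose {DOSES[a]}, tumour size {tumor:.2f}")
```

The trade-off mirrors the one described above: once the simulated tumour is small, the toxicity penalty makes lower or skipped doses the better choice.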