…the only really useful value of artificial intelligence in chest radiography is, at best, to provide triage support — tell us what is normal and what is not, and highlight where it could possibly be abnormal. Just don’t claim that AI can definitively tell us what the abnormality is: it can’t do so any more accurately than we can, because the data is dirty, and the data is dirty because we made it that way.
This is a generally good article on the challenges of using poorly annotated medical data to train machine learning algorithms. However, there are three relevant points that the author doesn’t address at all:
- He assumes that algorithms will only be trained on chest images that have been annotated by human beings. They won’t be. In fact, I can’t see why anyone would do this, for exactly the reasons he states. What is more likely is that AI will look across a wide range of clinical data points and use them, in association with the CXR, to determine a diagnosis. So, if the (actual) diagnosis is a cardiac issue, you’d expect the image to correlate with cardiac markers, with less weight assigned to infection markers. Likewise, if the diagnosis were pneumonia, you’d see changes in infection markers but little weight assigned to cardiac information. In other words, the analysis of CXRs won’t be informed by human-annotated reports; it’ll happen through correlation with all the other clinical information gathered from the patient.
- He starts out by presenting a really detailed argument explaining the incredibly low inter-rater reliability, inaccuracy and weak validity of human judges (in this case, radiologists) when it comes to analysing chest X-rays, but then concludes that we should leave interpretation to them anyway, rather than to algorithms.
- He is a radiologist, which should at least make one pause when considering the final recommendation is to leave things to the radiologists.
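The weighting idea in the first bullet can be made concrete with a toy scorer. Everything here is invented for illustration — the marker names, the weights, and the two-way choice are not from the article, and a real system would learn such weights from data rather than hard-code them:

```python
# Hypothetical sketch: correlating a CXR-derived abnormality score with
# other clinical data points. Feature names and weights are illustrative only.

def weighted_diagnosis(image_score, troponin, crp, wbc):
    """Toy multimodal scorer: weighs a chest X-ray score against cardiac
    (troponin) and infection (CRP, white cell count) markers."""
    cardiac = 0.6 * image_score + 0.4 * troponin            # cardiac hypothesis
    infection = 0.5 * image_score + 0.3 * crp + 0.2 * wbc   # pneumonia hypothesis
    return "cardiac" if cardiac > infection else "pneumonia"

# A raised troponin with quiet infection markers pushes the label cardiac,
# even though the image score alone is ambiguous.
print(weighted_diagnosis(image_score=0.7, troponin=0.9, crp=0.1, wbc=0.2))
```

The point of the sketch is only that the image never has to be interpreted in isolation: the same ambiguous shadow gets labelled differently depending on which other markers move with it.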
These points aside, the author makes an excellent case for why we need to make sure that medical data are clean and annotated with machine-readable tags. Well worth a read.
…nations that have begun to prepare for and explore AI will reap the benefits of an economic boom. The report also demonstrates how anyone who hasn’t prepared, especially in developing nations, will be left behind… In the developing world, in the developing countries or countries with transition economies, there is much less discussion of AI, both from the benefit or the risk side.
The growing divide between nations that are prepared for widespread automation and those that aren’t, between companies that can cut costs by replacing workers and the newly unemployed people themselves, puts us on a collision course for conflict and backlash against further developing and deploying AI technology
Source: Robitzski, D. (2018). If Artificial Intelligence Only Benefits a Select Few, Everyone Loses.
A short post that’s drawn mainly from the 64-page McKinsey report (Notes From the Frontier: Modeling the Impact of AI on the World Economy). This is something that I’ve tried to highlight when I’ve talked about this technology to skeptical colleagues; in many cases, AI in the workplace will arrive as a software update and will, therefore, be available in developing as well as developed countries. This isn’t like buying a new MRI machine, where the cost is in the hardware and ongoing support. The existing MRI machine will get an update over the internet and from then on it’ll include analysis of the image and automated reporting. And now the cost of running your radiology department at full staff capacity is starting to look more expensive than it needs to be. This says nothing of the other important tasks that radiologists perform; the fact is that a big component of their daily work is classifying images, and for human beings, that ship has sailed. While in more developed economies it may be easier to relocate expertise within the same institution, I don’t think we’re going to have that luxury in the developing world. If we’re not thinking about these problems today, we’re going to be awfully unprepared when that software update arrives.
“Human detection and identification is error-prone, inconsistent and inefficient. Perhaps most importantly, it’s not scalable,” says Morgan. “Newer imaging technologies are outstripping human capabilities to analyze the data we can produce.”
Source: Eagle-eyed machine learning algorithm outdoes human experts — ScienceDaily
The point here is that data is being generated faster than we can analyse and interpret it. Big data is not a storage problem, it’s an analysis problem. Yes, we’ve had large sets of data before (think, libraries) but no-one expected a human being to read through, and make sense of, all of it. Now that digital health-related data is being generated by institutions (e.g. CT and MRI scans, EHRs), wearables (e.g. Fitbits, smart contact lenses), embeddables (e.g. wifi enabled pacemakers, insulin pumps) and ingestibles (e.g. bluetooth-enabled smart pills), it’s clear that no single service provider will have the cognitive capacity to analyse and interpret the data flowing from patients at that scale.
As more and more of the data we use in healthcare is digitised, we’ll need algorithmic assistance to filter out and highlight what is important for our specific context (e.g. what a physio needs to know, rather than what a nurse needs). There will obviously be a role for health professionals in designing and evaluating those algorithms, but will we be forward-thinking enough to clearly describe those roles and to prepare future clinicians for them?
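A context-specific filter of this kind can be sketched in a few lines. The roles, event tags and sample record below are all made up for illustration — a real system would draw on coded clinical terminologies, not hand-written tag sets:

```python
# Illustrative sketch only: role-based filtering of a patient record.
# Roles, tags, and the sample events are invented for this example.

RELEVANT_TAGS = {
    "physio": {"mobility", "musculoskeletal", "pain"},
    "nurse": {"vitals", "medication", "wound_care", "pain"},
}

def filter_for_role(events, role):
    """Return only the events tagged as relevant to the given clinician role."""
    wanted = RELEVANT_TAGS[role]
    return [e for e in events if e["tag"] in wanted]

record = [
    {"tag": "vitals", "note": "BP 120/80"},
    {"tag": "mobility", "note": "walked 50 m unaided"},
    {"tag": "medication", "note": "paracetamol 1 g given"},
]

print([e["note"] for e in filter_for_role(record, "physio")])  # mobility note only
print([e["note"] for e in filter_for_role(record, "nurse")])   # vitals + medication
```

The design question the paragraph raises is exactly who decides what goes into those tag sets — that mapping from role to relevance is where clinicians would need to be involved.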
We know that AI could prove highly beneficial for radiologists by cutting down on read times and improving accuracy. In addition, AI could be a strong resource for mining large data sets for both individual patient care and global insights. But first, we must access the images.
Today’s traditional hardware, CDs, and PACS (picture archiving and communication systems) lock data deep inside them and prevent interoperability.
Source: Medical images: the only photos not in the cloud – AI Med
I’d never considered this before, but it’s obviously true, and for good reason: patient anonymity and privacy are good reasons to lock down medical images. But it also means that we won’t be able to run machine learning algorithms on that data at scale, nor will we be able to compare data from different populations when the medical images sit on different servers in different countries and are regulated by different laws and policies.
If we want to see the kinds of progress being made in other areas of image classification, we may need to reconsider our current policies around sharing patient data. Of course we’ll need consent from patients, as well as a means of ensuring data transfer across systems. This second point alone would be worth pursuing anyway, as it may lead to a set of (open) standards for interoperability between different EHR systems.
As with all things related to machine learning, having access to high-fidelity, well-labelled data is key. If we don’t make patient data accessible in some form or another, we may find it hard to use AI-based systems in healthcare. This obviously assumes that we want AI-based systems in healthcare in the first place.