Comment: Computer vision is far from solved.

You could argue that because these pictures are designed to fool AI, it’s not exactly a fair fight. But it’s surely better to understand the weaknesses of these systems before we put our trust in them.

Vincent, J. (2019). The mind-bending confusion of ‘hammer on a bed’ shows computer vision is far from solved. The Verge.
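
To make the failure mode concrete, here is a minimal sketch of how you might probe an off-the-shelf classifier yourself. It assumes a recent version of torchvision (0.13 or later), and "photo.jpg" is a placeholder for any image you want to test, such as a familiar object photographed in an unusual context. The point is that the softmax "confidence" can be high even when the label is wrong.

```python
# A minimal sketch (assuming torchvision >= 0.13) of probing a pretrained
# ImageNet classifier. "photo.jpg" is a placeholder for any local image,
# e.g. a familiar object photographed in an unusual context.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # the resize/crop/normalise pipeline the model expects
batch = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)

# Print the top-5 labels with their softmax scores; on out-of-context images
# these scores are often high for the wrong label.
for p, idx in zip(*torch.topk(probs, 5)):
    print(f"{weights.meta['categories'][idx]:30s} {p.item():.1%}")
```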

This is an important issue to be aware of: the published studies showing that AI is vastly superior to human perception may hold true only in very narrow, tightly controlled situations. If we’re not aware of that, we may be willing to place too much trust in systems that are fundamentally biased or inaccurate when it comes to performance in the real world.

For example, consider decision-making in expert systems (something like IBM’s Watson), where the system is trained on retrospective data, usually from places that have a lot of it. This might translate into the system making suggestions for patient management based on what has been done in the past, in circumstances that are completely different to the current context. If I’m a family practitioner practising in rural South Africa, it may not be that useful to know what an expert oncologist in Boston would have done in a similar situation.

The management options provided by the system are unlikely to be feasible to implement because of differences in people, culture, language, society, health systems, and so on. But unless I know that the data my expert system was trained on is contextually flawed, I may simply go ahead with its suggestions and then have no idea why they fail. It’s important to test AI systems in situations where we know they’ll break before we roll them out in the real world.
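
Here is a minimal sketch of what such a pre-deployment stress test might look like. Everything in it is synthetic and illustrative: the two "sites", the features and the shift between them are invented, and the shift is a crude stand-in for real differences in population, practice patterns and health systems.

```python
# A toy stress test: train a classifier on retrospective data from one context
# ("site A") and measure it on a context where the relationship between
# features and outcome has changed ("site B"). Purely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_site(n, w):
    """Synthetic patients: two features, outcome driven by site-specific weights w."""
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(0, 0.3, size=n) > 0).astype(int)
    return X, y

# The same features predict the outcome differently at the two sites.
X_a, y_a = make_site(5000, w=np.array([1.0, 1.0]))   # where the training data comes from
X_b, y_b = make_site(5000, w=np.array([1.0, -1.0]))  # where the model is deployed

model = LogisticRegression().fit(X_a, y_a)

print("site A accuracy:", accuracy_score(y_a, model.predict(X_a)))  # high, ~0.95
print("site B accuracy:", accuracy_score(y_b, model.predict(X_b)))  # ~0.50, i.e. chance
```

In this toy setup the model looks excellent on the data it came from and performs at chance at the new site, and nothing in the site-A numbers warns you of that. That is the silent failure the rural-practice example describes.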

Eagle-eyed machine learning algorithm outdoes human experts — ScienceDaily

“Human detection and identification is error-prone, inconsistent and inefficient. Perhaps most importantly, it’s not scalable,” says Morgan. “Newer imaging technologies are outstripping human capabilities to analyze the data we can produce.”

Source: Eagle-eyed machine learning algorithm outdoes human experts — ScienceDaily

The point here is that data is being generated faster than we can analyse and interpret it. Big data is not a storage problem; it’s an analysis problem. Yes, we’ve had large sets of data before (think of libraries), but no-one expected a human being to read through, and make sense of, all of it. Now that digital health-related data is being generated by institutions (e.g. CT and MRI scans, EHRs), wearables (e.g. Fitbits, smart contact lenses), embeddables (e.g. wifi-enabled pacemakers, insulin pumps) and ingestibles (e.g. bluetooth-enabled smart pills), it’s clear that no single service provider will have the cognitive capacity to analyse and interpret the data flowing from patients at that scale.

As more and more of the data we use in healthcare is digitised, we’ll need algorithmic assistance to filter the stream and highlight what is important for our specific context (i.e. what a physio needs to know about, rather than what a nurse needs). There will obviously be a role for health professionals in designing and evaluating those algorithms, but will we be forward-thinking enough to clearly describe those roles and to prepare future clinicians for them?
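
As a toy illustration of that kind of filtering (the roles, event types and relevance rules below are all hypothetical, and a real system would learn such mappings rather than hard-code them):

```python
# A toy role-based filter over a patient's data stream. The events, roles and
# the relevance table are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Event:
    patient_id: str
    kind: str    # e.g. "mobility", "glucose", "imaging"
    detail: str

# Which kinds of event each role needs to see (assumed, not from any standard).
RELEVANCE = {
    "physio": {"mobility", "imaging"},
    "nurse": {"glucose", "vitals"},
}

def highlights(stream, role):
    """Keep only the events a given role needs to act on."""
    return [e for e in stream if e.kind in RELEVANCE.get(role, set())]

stream = [
    Event("p1", "glucose", "fasting glucose 11.2 mmol/L"),
    Event("p1", "mobility", "step count down 40% this week"),
    Event("p1", "imaging", "new knee MRI report available"),
]

for e in highlights(stream, "physio"):
    print(e.kind, "->", e.detail)  # only the mobility and imaging events
```

The hard part, of course, is deciding what belongs in each role’s set, which is exactly the design and evaluation work that future clinicians would need to be prepared for.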