Categories: AI, clinical

Comment: Computer vision is far from solved.

You could argue that because these pictures are designed to fool AI, it’s not exactly a fair fight. But it’s surely better to understand the weaknesses of these systems before we put our trust in them.

Vincent, J. (2019). The mind-bending confusion of ‘hammer on a bed’ shows computer vision is far from solved. The Verge.

This is an important issue to be aware of: published studies claiming that AI is vastly superior to human perception may hold true only in very narrow, tightly controlled situations. If we're not aware of that, we may place too much trust in systems that are fundamentally biased or inaccurate when it comes to performance in the real world.

For example, consider decision-making in expert systems (something like IBM's Watson), where the system is trained on retrospective data, usually from places that have a lot of data. This might translate into the system making suggestions for patient management based on what has been done in the past, in circumstances that are completely different from the current context. If I'm a family practitioner practising in rural South Africa, it may not be that useful to know what an expert oncologist in Boston would have done in a similar situation.

The management options the system provides are unlikely to be feasible to implement, because of differences in people, culture, language, society, health systems, and so on. But unless I know that the data my expert system was trained on is contextually flawed, I may simply go ahead and then have no idea why it fails. It's important to test AI systems in situations where we know they'll break before we roll them out in the real world.