Michael Rowe

Trying to get better at getting better

Resource: AI Blindspot – A discovery process for spotting unconscious biases and structural inequalities in AI systems.

AI Blindspots are oversights in a team’s workflow that can generate harmful unintended consequences. They can arise from our unconscious biases or structural inequalities embedded in society. Blindspots can occur at any point before, during, or after the development of a model. The consequences of blindspots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities.

AI Blindspot: A discovery process for spotting unconscious biases and structural inequalities in AI systems. MIT Media Lab.

This is a useful resource for helping teams work through the different ways in which their plans for using AI may introduce problems into a system. All of these considerations are relevant in health contexts. The resource covers three main areas:

  1. Planning: Purpose, Representative data, Abusability, and Privacy.
  2. Building: Optimization criteria, Discrimination by proxy, and Explainability.
  3. Deploying: Generalization error, and Right to contest.

You can download the AI Blindspots cards to use as a handy reference in planning meetings, or simply use them to generate discussion.
