Categories: AI

Resource: AI Blindspot – A discovery process for spotting unconscious biases and structural inequalities in AI systems.

AI Blindspots are oversights in a team’s workflow that can generate harmful unintended consequences. They can arise from our unconscious biases or structural inequalities embedded in society. Blindspots can occur at any point before, during, or after the development of a model. The consequences of blindspots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities.

Source: AI Blindspot, MIT Media Lab.

This is a good resource to help teams work through the different ways in which their plans for using AI may introduce problems into a system, all of which are relevant in health contexts. The resource covers three main areas:

  1. Planning: Purpose, Representative data, Abusability, and Privacy.
  2. Building: Optimization criteria, Discrimination by proxy, and Explainability.
  3. Deploying: Generalization error and Right to contest.

You can download the AI Blindspots cards to use as a handy reference in planning meetings, or simply to generate discussion.

Categories: AI education

We Need Transparency in Algorithms, But Too Much Can Backfire

The students had also been asked what grade they thought they would get, and it turned out that levels of trust in those students whose actual grades hit or exceeded that estimate were unaffected by transparency. But people whose expectations were violated – students who received lower scores than they expected – trusted the algorithm more when they got more of an explanation of how it worked. This was interesting for two reasons: it confirmed a human tendency to apply greater scrutiny to information when expectations are violated. And it showed that the distrust that might accompany negative or disappointing results can be alleviated if people believe that the underlying process is fair.

Source: We Need Transparency in Algorithms, But Too Much Can Backfire

This article uses the example of algorithmic grading of student work to discuss issues of trust and transparency. One of the findings I thought was a useful takeaway in this context is that full transparency may not be the goal; instead, we should aim for medium transparency, and only in situations where students’ expectations are not met. For example, a student whose grade was lower than expected might need to be told something about how it was calculated, but giving them too much information eroded trust in the algorithm completely. When students got the grade they expected, no transparency was needed at all, i.e. they didn’t care how the grade was calculated.

For developers of algorithms, the article also provides a short summary of what explainable AI might look like. Without exposing the underlying source code, which in many cases is proprietary and commercially valuable, explainable AI might simply identify the relationships between inputs and outcomes, highlight possible biases, and provide guidance that may help to address potential problems in the algorithm.
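As a hypothetical illustration (not something the article provides), the sketch below shows one way a developer might report input–outcome relationships without exposing a model’s internals, using scikit-learn’s model-agnostic permutation importance. The feature names, the synthetic data, and the `postcode_proxy` variable are all invented for this example.

```python
# Minimal sketch of "medium transparency": report which inputs drive a model's
# outputs without revealing the model's source code or parameters.
# Assumes scikit-learn is available; all data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical grading inputs, including a proxy variable (e.g. postcode)
# that should raise a bias flag if it turns out to be influential.
feature_names = ["essay_length", "vocab_score", "postcode_proxy"]
X = rng.normal(size=(500, 3))
y = 60 + 10 * X[:, 1] + 3 * X[:, 0] + rng.normal(scale=2, size=500)  # toy grades

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Permutation importance measures how much predictive accuracy drops when each
# input is shuffled -- a model-agnostic view of input/outcome relationships.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

A summary like this could be shared with students or auditors as an explanation of what influences a grade, while the model itself stays private.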