Comment: Facebook says it’s going to make it harder to access anti-vax misinformation

Facebook won’t go as far as banning pages that spread anti-vaccine messages…[but] would make them harder to find. It will do this by reducing their ranking and not including them as recommendations or predictions in search.

Firth, N. (2019). Facebook says it’s going to make it harder to access anti-vax misinformation. MIT Technology Review.

Of course this is a good thing, right? Facebook – already one of the most important ways that people get their information – is going to make it more difficult for readers to find information that opposes vaccination. With the recent outbreak of measles in the United States, we need to do more to ensure that searches for “vaccination” don’t also surface results encouraging parents not to vaccinate their children.

But what happens when Facebook (or Google, or Microsoft, or Amazon) starts making broader decisions about what information is credible, accurate or fake? That would actually be great if we could trust their algorithms. But trust requires that we’re allowed to see the algorithm (and also that we can understand it, which, in most cases, we can’t). In this case, it’s a public health issue and most reasonable people would see that the decision is the “right” one. But when companies tweak their algorithms to privilege certain types of information over others, I think we need to be concerned. Today we agree with Facebook’s decision, but how confident can we be that we’ll still agree tomorrow?

Also, vaccines are awesome.

Fairness matters: Promoting pride and respect with AI

We’re creating an open dataset that collects diverse statements from the LGBTIQ+ community, such as “I’m gay and I’m proud to be out” or “I’m a fit, happy lesbian that has just retired from a wonderful career” to help reclaim positive identity labels. These statements from the LGBTIQ+ community and their supporters will be made available in an open dataset, which coders, developers and technologists all over the world can use to help teach machine learning models how the LGBTIQ+ community speak about ourselves.

Source: Fairness matters: Promoting pride and respect with AI

It’s easy to say that algorithms are biased, because they are. It’s much harder to ask why they’re biased. They’re biased for many reasons, but one of the biggest contributors is that we simply don’t have diverse and inclusive data sets to train them on. Human bias and prejudice are reflected in our online interactions: the way we speak to each other on social media, the things we write about on blogs, the videos we watch on YouTube, the stories we share and promote. Project Respect is an attempt to increase the set of inclusive and diverse training data for better and less biased machine learning.
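To make the mechanism concrete, here is a minimal sketch (in Python, using scikit-learn) of how adding inclusive, positively labelled statements to a training set could reduce identity-term bias in a simple toxicity classifier. This is not Project Respect’s actual pipeline, and every example sentence and label below is a hypothetical illustration, not data from the real dataset.

```python
# Toy illustration: identity terms that appear mostly in abusive training text
# get learned as "toxic" signals; adding inclusive, non-toxic statements
# (like those Project Respect collects) can counteract that association.
# All sentences and labels here are hypothetical examples for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Skewed training data: the identity term only appears in toxic examples.
texts = [
    "you people are disgusting",              # toxic
    "gay people shouldn't be allowed here",   # toxic
    "what a lovely afternoon",                # non-toxic
    "the match was great fun",                # non-toxic
]
labels = [1, 1, 0, 0]  # 1 = toxic, 0 = non-toxic

# Inclusive statements labelled non-toxic, in the spirit of the open dataset.
inclusive_texts = [
    "I'm gay and I'm proud to be out",
    "I'm a fit, happy lesbian that has just retired from a wonderful career",
]
inclusive_labels = [0, 0]

def train(x, y):
    """Fit a simple TF-IDF + logistic regression text classifier."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(x, y)
    return model

# A neutral identity statement used to probe the learned bias.
probe = ["I am a proud gay man"]

biased_model = train(texts, labels)
fairer_model = train(texts + inclusive_texts, labels + inclusive_labels)

print("P(toxic) without inclusive data:", biased_model.predict_proba(probe)[0][1])
print("P(toxic) with inclusive data:   ", fairer_model.predict_proba(probe)[0][1])
```

On a toy corpus like this the effect is small, but the design point stands: the classifier can only stop treating identity labels as toxicity signals if the training data shows those labels being used positively, which is exactly the gap the open dataset is meant to fill.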

Algorithms are biased because human beings are biased, and the ways that those biases are reflected back to us may be why we find them so offensive. Maybe we don’t like machine bias because of what it says about us.