Health professionals’ role in the banning of lethal autonomous weapons

This is a great episode from the Future of Life Institute, on the topic of banning lethal autonomous weapons. You may wonder, what on earth do lethal autonomous weapons have to do with health professionals? I wondered the same thing until I was reminded of the role that physios play in the rehabilitation of landmine victims. Landmines are less sophisticated than the next generation of lethal autonomous weapons, which means, in part, that they’re less able to distinguish between targets.

Weaponised drones, for example, will not only identify and engage targets based on age, gender, location, dress code, etc. but will also be able to reprioritise objectives independently of any human operator. In addition, unlike building a landmine, which (probably) requires some specialised training, weaponised drones will be produced en masse at low cost, fitted with commoditised hardware, programmable, and deployable at a distance from the target. These are tools of mass destruction for the consumer market, enabling a few to create immense harm to many.

The video below gives an example of how hundreds of drones can be coordinated by a single person. If these drones were fitted with explosives instead of flashing lights, you start to get a sense of how much damage they could do in a crowded space and how difficult it would be to stop them.

Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.

The American Medical Association has published a position statement on the role of artificial intelligence in augmenting the work of medical professionals, but no professional organisation has yet taken a stance on banning autonomous weapons. It seems odd that we recognise the significance of AI for enhancing healthcare but, apparently, not its potential for increasing human suffering. The medical and health professional community should not only advocate for the use of AI to improve health but also work to ensure it is not used for autonomous decision-making in armed conflict.

More reading and resources at https://futureoflife.org/2019/04/02/fli-podcast-why-ban-lethal-autonomous-weapons/.

Comment: Nvidia AI Turns Doodles Into Realistic Landscapes

Nvidia has shown that AI can use a simple representation of a landscape to render a photorealistic vista that doesn’t exist anywhere in the real world… It has just three tools: a paint bucket, a pen, and a paintbrush. After selecting your tool, you click on a material type at the bottom of the screen. Material types include things like tree, river, hill, mountain, rock, and sky. The organization of materials in the sketch tells the software what each part of the doodle is supposed to represent, and it generates a realistic version of it in real time.

Whitwam, R. (2019). Nvidia AI Turns Doodles Into Realistic Landscapes. Extreme Tech.

You may be tempted to think of this as substitution, where the algorithm looks at the shape you draw, notes the “material” it represents (e.g. a mountain) and then matches it to an image of that thing that already exists. But that’s not what’s happening here. The AI is creating a completely new version of what you’ve specified, based on what it knows that thing to look like.

So when you say that this shape is a mountain, it has a general concept of “mountain”, which it uses to create something new. If it were a simple substitution, the algorithm would need you to draw a shape that corresponds to an existing feature of the world. I suppose you could argue that this isn’t real creativity, but I think you’d be hard-pressed to say that it’s not moving in that direction. The problem (IMO) with every argument saying that AI is not creative is that these things only ever get better. It may not conform to the definition of creativity that you’re using today, but tomorrow it will.
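
For a sense of why this is generation rather than lookup, here’s a minimal, hypothetical sketch of a conditional image generator in PyTorch. To be clear, this is not Nvidia’s actual model (theirs is a large generative adversarial network trained on landscape photographs, and every class name, layer size and material list below is my own assumption); it only shows the shape of the computation: a label map from the doodle goes in, and newly generated pixels come out, with no existing photo ever being retrieved.

```python
# A minimal, hypothetical sketch of semantic-map-to-image generation.
# Not Nvidia's GauGAN code; just the shape of the computation.
import torch
import torch.nn as nn

NUM_CLASSES = 6  # hypothetical materials: tree, river, hill, mountain, rock, sky

class DoodleToImage(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # The generator never "looks up" a stored photo; it maps the label
        # layout (plus random noise) to pixels it has learned to associate
        # with each material.
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + 1, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, label_map):
        # label_map: (batch, H, W) integer class ids taken from the doodle
        one_hot = nn.functional.one_hot(label_map, NUM_CLASSES)
        one_hot = one_hot.permute(0, 3, 1, 2).float()
        noise = torch.randn_like(one_hot[:, :1])  # randomness -> novel scenes
        return self.net(torch.cat([one_hot, noise], dim=1))

doodle = torch.randint(0, NUM_CLASSES, (1, 128, 128))  # a fake "doodle"
image = DoodleToImage()(doodle)  # (1, 3, 128, 128): brand-new pixels
```

Because random noise is mixed in with the label map, the same doodle can yield many different landscapes, none of which corresponds to a real place.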

What does scholarship sound like?

Creative work is scholarly work

The Specialist Committee recognises the importance of both formal academic research and creative outputs for the research cultures in many departments, as well as for individual researchers; it thus aims to give equal value to theoretical/empirical research (i.e. historical, theoretical, analytic, sociological, economic, etc. studies from an arts perspective) and creative work (i.e. in cases where the output is the result of a demonstrable process of investigation through the processes of making art.); the latter category of outputs is treated as fully equivalent to other types of research output, but in all cases credit is only given to those outputs which demonstrate quality and have a potential for impact and longevity.

The South African National Research Foundation (NRF) has recently shared guidelines for the recognition of creative scholarly outputs, which serve to broaden the concept of what kind of work can be regarded – and importantly, recognised – as “scholarly”. The guidelines suggest that creative work could include (among others):

  • Non-conventional academic activities related to creative work and performance: Catalogues, programmes, and other supporting documentation describing the results of arts research in combination with the works themselves;
  • In Drama and theatre: scripts or other texts for performances and the direction of and design (lighting, sound, sets, costumes, properties, etc.) for live presentations as well as for films, videos and other types of media presentation; this also applies to any other non-textual public output (e.g. puppetry, animated films, etc.), provided they can be shown to have entered the public domain;

I’m going to talk about podcasts as scholarly outputs because I’m currently involved in three podcast projects: In Beta (conversations about physiotherapy education), SAAHE health professions educators (conversations about educational research in the health professions), and a new project to document the history of the physiotherapy department at the University of the Western Cape.

These podcasts take up a lot of time; time that I’m not spending writing the articles that are the primary form of intellectual capital in academia. In the light of the new guidelines from the NRF, I wondered if a podcast could be considered a scholarly output. There are other reasons why we may want to consider recognising podcasts as scholarly outputs:

  1. They increase access for academics who are doing interesting work but who, for legitimate reasons, may not be willing to write an academic paper.
  2. They increase diversity in the academic domain because they can be (should be?) published in the language of preference of the hosts.
  3. They reduce the dominance of the PDF for knowledge distribution, which could only be a good thing.
  4. Conversations among academics are a legitimate form of knowledge creation, as new ideas emerge from the interactions between people (like, for example, in a focus group discussion).
  5. Podcasts – if they are well-produced – are likely to have a wider audience than academic papers.
  6. Audio gives an audience another layer of interesting-ness when compared to reading a PDF.
  7. Academic podcasts may make scholarship less boring (although, to be honest, we’re talking about academics, so I’m not convinced by this one).

What do we mean by “scholarship”?

Most people think of scholarly work as the research article (and probably the conference presentation) but there’s no reason that the article/PDF should remain the primary form of recognised scholarly output. This format also requires that anyone wanting to contribute to a scholarly conversation must first learn the following:

  • “Academic writing” – the specific grammar and syntax we expect from our writers.
  • Article structure – usually, the IMRAD format (Introduction, Methods, Results and Discussion).
  • Journals – where to submit, who is most likely to publish, what journals cater for which audiences.
  • Research process – I’m a big fan of the scientific method but sometimes it’s enough for a new idea to be shared without it first having to be shown to be “true”.

Instead of expecting people to first learn the traditions and formal structures that we’ve accepted as the baseline reality for sharing scholarly work, what if we just asked what scholarship is? Instead of defining “scholarship” as “research paper/conference presentation”, what if we started with what scholarship is considered to be and then saw what maps onto that? From Wikipedia:

The scholarly method or scholarship is the body of principles and practices used by scholars to make their claims about the subject as valid and trustworthy as possible and to make them known to the scholarly public… Scholarship…is creative, can be documented, can be replicated or elaborated, and is peer-reviewed.

So there’s nothing about publishing PDFs in journals as part of this definition of scholarship. What about the practice of doing scholarly work? I’m going to use Boyer’s model of scholarship, not because it’s the best but because it is relatively common and not very controversial. Boyer includes four categories of scholarly work (note that this is not a series of progressions that one has to move through in order to reach the last category…each category is a form of scholarship on its own):

  • Scholarship of discovery: what is usually considered to be basic research or the search for new knowledge.
  • Scholarship of integration: where we aim to give meaning to isolated facts by considering them in context; it aims to ask what the findings of discovery mean.
  • Scholarship of application: the use of new knowledge to solve problems that we care about.
  • Scholarship of teaching: the examination of how teaching new knowledge can both educate and motivate those in the discipline; it is about sharing what is learned.

Here are each of Boyer’s categories with reference to podcasts:

  • Discovery (advancing knowledge): Can we argue that knowledge can be advanced through conversation? Is there something Gestalt in a conversation where a new whole can be an emergent property of the constituent parts? How is a podcast conversation any different to a focus group discussion where the cohort is a sample with specific characteristics of interest?
  • Integration (synthesis of knowledge): Can the editing and production of a podcast, using the conversation as the raw data, be integrated with other knowledge in order to add new levels of explanation and critique? This could either be in the audio file or as show notes. Could podcast guests be from different disciplines, exploring a topic from different perspectives?
  • Application/engagement (applied knowledge): Can we use emergent knowledge from the podcast to do something new in the world? Can we take what is learned from the initial conversation, which may have been modified and integrated with other forms of knowledge (in multiple formats e.g. text, images, video), and apply it to a problem that we care about?
  • Teaching (openly shared knowledge): Can we, after listening to a podcast and applying what we learned, share what was done, as well as the result, with others so that the process (methods) and outcomes (results) can be evaluated by our peers?

This may not be a definitive conclusion to the question of whether podcasts could be regarded as scholarly work but, at the very least, it suggests that it’s something we could consider. If you accept that a podcast might be regarded as scholarly, we can then ask how we might go about formally recognising it as such.

Workflow to distribute scholarly work

I’m going to use an academic, peer-reviewed, traditional journal (or at least, the principle of one) to explore a workflow that we can use to get a sense of how a podcast could be formally recognised as scholarly work. We first need to note that a journal has two primary functions:

  1. Accreditation, which is usually a result of the journal’s peer review process and its brand/history/legacy. The New England Journal of Medicine is a recognised “accreditor” of scholarly work, not because there is anything special about the journal but simply because it is the New England Journal of Medicine. Their reputation is enough for us to trust them when they say that the ideas presented in a piece of work have been tested through peer review and have not been found wanting.
  2. Distribution, which in the past meant printing those ideas on paper and literally shipping them around the world. Today, this distribution function has changed to Discoverability; the journal does what it can to make sure your article can be found by search engines, and if you’re the New England Journal of Medicine you don’t need to do much because Google will do your quality signalling for you by surfacing your articles above others. Therefore, journals host content and try to increase the chances that we can find it, and the distribution function has largely been taken over by us (because we share articles on behalf of the journals).

By separating out the functions of a journal we can see that it’s possible for a journal to accredit work that it does not necessarily have to host itself. We could have a journal that is asked to accredit a piece of work, i.e. signal to readers (or in our case, listeners) that the work has passed some set of criteria that we use to describe it as “scholarly”.

What might this workflow look like? Since I’m trying to show how podcasts could be accredited within the constraints of the existing system of journal publications, I’m going to stick to a traditional process as closely as possible, even though I think that this makes the process unnecessarily complicated, especially when you think about what needs to happen following the peer review. Here is what I think the accreditation process could look like:

  1. Create a podcast episode (this is basically a focus group discussion) on a topic of interest where guests discuss a question or a problem that their community of peers recognises as valid. This could be done by a call to the community for topics of interest.
  2. Edit the podcast, including additional resources and comments as show notes. The podcast creators could even include further comments and analysis, either before, during or after the initial recorded conversation. The audio includes the raw data (the recorded conversation), real-time analysis and critique by participants, discussion of potential applications of the emergent knowledge, and conclusion (maybe via post-recording reflection and analysis).
  3. Publish the episode on any podcast-hosting platform. The episode is now in the public domain.
  4. Submit a link to the episode to a journal, which embeds the podcast episode as a post (“article”) along with a short description of what it includes (like an abstract), a description of the process of creation (like the methods), the outcome of the conversation (like a conclusion), and a list of additional reading (like a reference list).
  5. The journal begins the process of accrediting the podcast by allocating peer reviewers, whose reviews are published alongside the embedded podcast in the journal.
  6. Reviewers review the “methods”, “conclusions”, “references” and knowledge claims of the podcast guests, add comments to the post, and highlight the limitations of the episode. The show notes include a description of the process, participants, additional readings, DOI, etc. This could be where the process ends; the journal has used peer review to assign a measure of “quality” to the episode and does not attempt to make a judgement on “value” (which is what journals do when they reject submissions). It is left to the listener to decide if the podcast has value for them.
  7. The following points are included for completeness as they follow the traditional iterative process that comes after peer review. I don’t think these steps are necessary; they are included only to map the workflow onto a process that most authors will be familiar with:
    1. The podcast creators make some changes to the audio file, perhaps by including new analysis and comments in the episode, or maybe by adding new information to the textual component of the episode (i.e. the show notes).
    2. The new episode is released. This re-publication of the episode would need to be classified as an entirely different version since the original episode would have been downloaded and shared to networks. An updated version would, therefore, need a new URL, a new page on the podcast hosting service, etc.

In the example workflow above, the journal never hosts the audio file and does not “publish” the podcast. It includes an embedded version of the episode, the show notes (which include the problem under discussion, the participants and their bios, an analysis of the conversation, and a list of references), as well as the full peer reviews. Readers/listeners then decide on the “importance” of the episode and whether or not to assign value to it. In other words, the readers/listeners decide what work is valuable, rather than the peer reviewers or the journal.
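
Here’s a minimal sketch of what the journal’s embedded “article” record (step 4 above) might contain. Every field name and value below is hypothetical – I’m not describing any existing journal’s schema – but it makes the separation of functions visible: the journal holds a pointer to the episode plus the scholarly apparatus around it, never the audio file itself.

```python
# A hypothetical sketch of the record a journal might keep for an
# accredited podcast episode. All field names are my own invention,
# not any journal's actual schema. Note: the journal stores a link
# to the episode, never the audio file.
episode_record = {
    "episode_url": "https://example-podcast-host.org/in-beta/episode-12",  # hypothetical URL
    "doi": "10.xxxx/hypothetical.doi",  # assigned as part of accreditation
    "abstract": "Short description of the conversation and its question.",
    "methods": "How guests were selected, how the episode was edited, etc.",
    "conclusion": "The outcome of the conversation.",
    "references": ["Boyer (1990)", "NRF creative-output guidelines"],
    "peer_reviews": [  # published openly alongside the embedded episode
        {"reviewer": "Reviewer 1", "comments": "...", "limitations": "..."},
        {"reviewer": "Reviewer 2", "comments": "...", "limitations": "..."},
    ],
    "version": 1,  # a revised episode gets a new record and a new URL
}
```

A revised episode (step 7 above) would get a new record with an incremented version and a new URL, leaving the original citable object intact.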

In summary, I’ve tried to describe why podcasts are potentially a useful format for creating and sharing new knowledge, presented a framework for determining whether a podcast could be considered scholarly, and described the workflow and some practical implications of an accreditation process using a traditional journal.

Summary: Ten simple rules for structuring papers

Good scientific writing is essential to career development and to the progress of science. A well-structured manuscript allows readers and reviewers to get excited about the subject matter, to understand and verify the paper’s contributions, and to integrate these contributions into a broader context. However, many scientists struggle with producing high-quality manuscripts and are typically untrained in paper writing. Focusing on how readers consume information, we present a set of ten simple rules to help you communicate the main idea of your paper. These rules are designed to make your paper more influential and the process of writing more efficient and pleasurable.

Mensh, B. & Kording, K. (2017). Ten simple rules for structuring papers. PLoS Computational Biology, 13(9): e1005619.

Thank you to Guillaume Christe for pointing to this paper on Twitter. While I’m not convinced that the title should refer to “rules”, I thought it was a useful guide to thinking about article structure. I’m also aware that most people won’t have time to read the whole thing so I’m posting the summary notes I made while reading it. Having said that, I think the whole paper (link here) is definitely worth reading. And, if you like this you may also like this table of suggestions from Josh Bernoff’s Writing Without Bullshit. OK, on with the summary.

First, there’s this helpful table from the authors as a very brief overview.

https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1005619.t001

Principles (Rules 1–4)

Rule 1: Focus your paper on a central contribution, which you communicate in the title. Adding more ideas may be necessary, but doing so makes it harder for the reader to remember what the paper is about. If the title doesn’t make a reader want to read the paper, all the work is for nothing. A focused title can also help the author to stay on track.

Rule 2: Write for flesh-and-blood human beings who do not know your work. You are the least qualified person to judge your writing from the perspective of the reader. Design the paper for someone who must first be made to care about your topic, and then who wants to understand your answer with minimal effort. This is not about showing how clever you are.

Rule 3: Stick to the context-content-conclusion (C-C-C) scheme. Aim to write “popular” (i.e. memorable and re-tellable) stories that have a clear beginning, middle and end. While there are many ways to tell stories, each of which engages different readers, this structure is likely to be appropriate for most. Also, the structure of the paper need not be chronological.

Rule 4: Optimize your logical flow by avoiding zig-zag and using parallelism. Only the central idea of a paper should be presented in multiple places. Group similar ideas together to avoid moving the reader’s attention around.

The components of a paper (Rules 5–8)

Rule 5: Tell a complete story in the abstract. Considering that the abstract may be (is probably) the only part of the paper that is read, it should tell the whole story. Ensure that the reader has enough context (i.e. background/introduction) to interpret the results. Avoid writing the abstract as an afterthought, as it often requires many iterations to do its job well.

Rule 6: Communicate why the paper matters in the introduction. The purpose of the introduction is to describe the gap that the study aims to fill. It should not include a broad literature review but rather narrow the focus of attention to the problem under consideration.

Rule 7: Deliver the results as a sequence of statements, supported by figures, that connect logically to support the central contribution. While there are different ways of presenting results, often discipline-specific, the main purpose is to convince the reader that the central claim is supported by data and argument. The raw data should be presented alongside the interpretation in order to allow the reader to reach their own conclusions (hopefully, these are aligned with the intent of the paper).

Rule 8: Discuss how the gap was filled, the limitations of the interpretation, and the relevance to the field. The discussion explains how the findings have filled the gap/answered the question that was posed in the introduction. It often includes limitations and suggestions for future research.

Process (Rules 9 and 10)

Rule 9: Allocate time where it matters: Title, abstract, figures, and outlining. Spend time on areas that demonstrate the central theme and logic of the argument. The methods section is often ignored, so budget time accordingly. Outline the argument throughout the paper by writing one informal sentence for each planned paragraph.

Rule 10: Get feedback to reduce, reuse, and recycle the story. Try not to get too attached to the writing, as it may be more efficient to delete whole sections and start again than to proceed by iterative editing. Try to describe the entire paper in a few sentences, which helps to identify the weak areas. Aim to get critical feedback from multiple readers with different backgrounds.


And finally, here’s a great figure to show how each section can be structured using the guidelines in the article.

https://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1005619.g001

Link: How AI Will Rewire Us

Radical innovations have previously transformed the way humans live together. The advent of cities…meant a less nomadic existence and a higher population density. More recently, the invention of technologies including the printing press, the telephone, and the internet revolutionized how we store and communicate information.

As consequential as these innovations were, however, they did not change the fundamental aspects of human behaviour: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching.

But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are—not just in our direct interactions with the machines in question, but in our interactions with one another.

Christakis, N. (2019). How AI Will Rewire Us. The Atlantic.

The author provides a series of experimental outcomes showing how, depending on the nature of the interacting AI, human beings can be made to respond differently to teammates and collaborators. For example, having a bot make minor errors and then apologise can nudge people towards being more compassionate with each other. This should give us pause as we consider how we want to design the systems that we’ll soon be working with.

For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship.


See also: Comment: In competition, people get discouraged by competent robots.

10 recommendations for the ethical use of AI

In February the New York Times hosted the New Work Summit, a conference that explored the opportunities and risks associated with the emergence of artificial intelligence across all aspects of society. Attendees worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence, the results of which are listed below.

  1. Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
  2. Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
  3. Privacy: Users should be able to easily opt out of data collection.
  4. Diversity: A.I. technology should be developed by inherently diverse teams.
  5. Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
  6. Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.
  7. Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
  8. Collective governance: Companies should work together to self-regulate the industry.
  9. Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
  10. “Complementarity”: Treat A.I. as tool for humans to use, not a replacement for human work.

The list of recommendations seems reasonable enough on the surface, although I wonder how practical they are given the business models of the companies most active in developing AI-based systems. As long as Google, Microsoft, Facebook, etc. are generating the bulk of their revenue from advertising that’s powered by the data we give them, they have little incentive to be transparent, to disclose, to be regulated, etc. If we opt our data out of the AI training pool, the AI is more susceptible to bias and less useful/accurate, so having more data is usually better for algorithm development. And having internal processes to build trust? That seems odd.

However, even though it’s easy to find issues with all of these recommendations it doesn’t mean that they’re not useful. The more of these kinds of conversations we have, the more likely it is that we’ll figure out a way to have AI that positively influences society.

Comment: In competition, people get discouraged by competent robots

After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.

Lefkowitz, M. (2019). In competition, people get discouraged by competent robots. Cornell Chronicle.

This is worth noting since it seems increasingly likely that we’ll soon be working not only with more competent robots but also with more competent software. There are already concerns around how clinicians will respond to the recommendations of clinical decision-support systems, especially when those systems make suggestions that are at odds with the clinician’s intuition.

Paradoxically, the effect may be even worse with expert clinicians who may not always be able to explain their decision-making. Novices, who use more analytical frameworks (or even basic algorithms like IF this, THEN that) may find it easier to modify their decisions because their reasoning is more “visible” (System 2). Experts, who rely more on subconscious pattern recognition (System 1), may be less able to identify where in their reasoning process they fell victim to confounders like confirmation or availability bias, and so may be less likely to modify their decisions.

It seems really clear that we need to start thinking about how we’re going to prepare current and future clinicians for the arrival of intelligent agents in the clinical context. If we start disregarding the recommendations of clinical decision support systems, not because they produce errors in judgement but because we simply don’t like them, then there’s a strong case to be made that it is the human that we cannot trust.


Contrast this with automation bias, which is the tendency to give more credence to decisions made by machines because of a misplaced notion that algorithms are simply more trustworthy than people.

Comment: Why AI is a threat to democracy—and what we can do to stop it

The developmental track of AI is a problem, and every one of us has a stake. You, me, my dad, my next-door neighbor, the guy at the Starbucks that I’m walking past right now. So what should everyday people do? Be more aware of who’s using your data and how. Take a few minutes to read work written by smart people and spend a couple minutes to figure out what it is we’re really talking about. Before you sign your life away and start sharing photos of your children, do that in an informed manner. If you’re okay with what it implies and what it could mean later on, fine, but at least have that knowledge first.

Hao, K. (2019). Why AI is a threat to democracy—and what we can do to stop it. MIT Technology Review.

I agree that we all have a stake in the outcomes of the introduction of AI-based systems, which means that we all have a responsibility in helping to shape it. While most of us can’t be involved in writing code for these systems, we can all be more intentional about what data we provide to companies working on artificial intelligence and how they use that data (on a related note, have you ever wondered just how much data is being collected by Google, for example?). Here are some of the choices I’ve made about the software that I use most frequently:

  • Mobile operating system: I run LineageOS on my phone and tablet, which is based on Android but is modified so that the data on the phone stays on the phone i.e. is not reported back to Google.
  • Desktop/laptop operating system: I’ve used various Ubuntu Linux distributions since 2004, not only because Linux really is a better OS (faster, cheaper, more secure, etc.) but because open-source software is more trustworthy.
  • Browser: I switched from Chrome to Firefox with the release of Quantum, which saw Firefox catch up in performance metrics. With privacy as the default design consideration, it was an easy move to make. You should just switch to Firefox.
  • Email: I’ve looked around – a lot – and can’t find an email provider to replace Gmail. I use various front-ends to manage my email on different devices but that doesn’t get me away from the fact that Google still processes all of my emails on the back-end. I could pay for my email service provider – and there do seem to be good options – but then I’d be paying for email.
  • Search engine: I moved from Google Search to DuckDuckGo about a year ago and can’t say that I miss Google Search all that much. Every now and again I do find that I have to go to Google, especially for images.
  • Photo storage: Again, I’ve looked around for alternatives but the combination of the free service, convenience (automatic upload of photos taken on my phone), unlimited storage (for lower res copies) and the image recognition features built into Google Photos make this very difficult to move away from.
  • To do list: I’ve used Todoist and Any.do on and off for years but eventually moved to Todo.txt because I wanted to have more control over the things that I use on a daily basis. I like the fact that my work is stored in a plain text file (see the example after this list) and will be backwards compatible forever.
  • Note taking: I use a combination of Simplenote and Qownnotes for my notes. Simplenote is the equivalent of sticky notes (short-term notes that I make on my phone and delete after acting on them), and Qownnotes is for long-form note-taking and writing that stores notes as text files. Again, I want to control my data and these apps give me that control along with all of the features that I care about.
  • Maps: Google Maps is without equal and is so far ahead of anyone else that it’s very difficult to move away from. However, I’ve also used Here We Go on and off and it’s not bad for simple directions.
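
As an aside on the Todo.txt point above: the format’s appeal is that each task is a single line of plain text, with a few lightweight conventions (an “x” plus a date for completed tasks, a priority in parentheses, and “+project” and “@context” tags). The tasks below are made up for illustration, but the syntax follows the actual todo.txt convention:

```text
x 2019-03-28 Email guests about In Beta episode +podcast @admin
(A) 2019-04-02 Record UWC history interview +history @studio
(B) Draft show notes for SAAHE episode +podcast @writing
```

Any text editor on any platform can open and edit this file, which is what makes it backwards compatible forever.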

From the list above you can see that I pay attention to how my data is stored, shared and used, and that privacy is important to me. I’m not unsophisticated in my use of technology and I still can’t get away from Google for email, photos, and maps, arguably the most important data gathering services that the company provides. Maybe there’s something that I’m missing, but companies like Google, Facebook, Amazon and Microsoft are so entangled in everything that we care about that I really don’t see a way to avoid using their products. The suggestion that users should be more careful about what data they share, and who they share it with, is a useful thought experiment but the practical reality is that it would be very difficult indeed to avoid these companies altogether.

Google isn’t the only problem. See what Facebook knows about you.

Comment: Facebook says it’s going to make it harder to access anti-vax misinformation

Facebook won’t go as far as banning pages that spread anti-vaccine messages…[but] would make them harder to find. It will do this by reducing their ranking and not including them as recommendations or predictions in search.

Firth, N. (2019). Facebook says it’s going to make it harder to access anti-vax misinformation. MIT Technology Review.

Of course this is a good thing, right? Facebook – already one of the most important ways that people get their information – is going to make it more difficult for readers to find information that opposes vaccination. With the recent outbreak of measles in the United States we need to do more to ensure that searches for “vaccination” don’t also surface results encouraging parents not to vaccinate their children.

But what happens when Facebook (or Google, or Microsoft, or Amazon) starts making broader decisions about what information is credible, accurate or fake? That would actually be great if we could trust their algorithms. But trust requires that we’re allowed to see the algorithm (and also that we can understand it, which in most cases, we can’t). In this case, it’s a public health issue and most reasonable people would see that the decision is the “right” one. But when companies tweak their algorithms to privilege certain types of information over other types of information, then I think we need to be concerned. Today we agree with Facebook’s decision but how confident can we be that we’ll still agree tomorrow?

Also, vaccines are awesome.