10 recommendations for the ethical use of AI

In February the New York Times hosted the New Work Summit, a conference that explored the opportunities and risks associated with the emergence of artificial intelligence across all aspects of society. Attendees worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence, the results of which are listed below.

  1. Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
  2. Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
  3. Privacy: Users should be able to easily opt out of data collection.
  4. Diversity: A.I. technology should be developed by inherently diverse teams.
  5. Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
  6. Trust: Organizations should have internal processes to self-regulate against the misuse of A.I., such as a chief ethics officer or an ethics review board.
  7. Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
  8. Collective governance: Companies should work together to self-regulate the industry.
  9. Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
  10. “Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

The list of recommendations seems reasonable enough on the surface, although I wonder how practical they are given the business models of the companies most active in developing AI-based systems. As long as Google, Microsoft, Facebook, and the like generate the bulk of their revenue from advertising powered by the data we give them, they have little incentive to be transparent, to disclose, or to be regulated. And if we opt our data out of the AI training pool, the resulting models become more susceptible to bias and less useful or accurate, since more data is usually better for algorithm development. As for relying on internal processes to build trust? Self-regulation without outside accountability seems odd.

However, even though it’s easy to find issues with all of these recommendations, that doesn’t mean they’re not useful. The more of these kinds of conversations we have, the more likely it is that we’ll figure out a way to have AI that positively influences society.