AI at Google: Our principles

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles

Source: AI at Google: Our principles

This list is a reasonable starting point if you’re looking for guidance on AI systems development, and it’s a serviceable stand-in for what is currently lacking in healthcare AI: a shared ethical framework. For example, you could readily map these principles onto principlist ethics (beneficence, non-maleficence, justice, autonomy), which many consider the cornerstone of professional ethical practice.

Note: You could argue that this is a self-serving list, published to bolster Google’s position as a company committed to doing the Right Thing (particularly since “Don’t be evil” was largely removed from its code of conduct). However, Google’s recent decision not to renew a lucrative contract with the Pentagon says something about its willingness to at least try to uphold that position. Regardless, even taken at face value, the list is a useful lens for thinking about how to develop AI-based systems.