The Organisation for Economic Co-operation and Development (OECD) has just released a list of recommendations to promote the development of AI that is “innovative and trustworthy and that respects human rights and democratic values”. The principles are meant to complement existing OECD standards around security, risk management and business practices, and could be seen as a response to concerns around the potential for AI systems to undermine democracy.
The principles were developed by a panel of more than 50 experts from 20 countries, along with leaders from the business, civil society, academic and scientific communities. It should be noted that these principles are not legally binding; they are best thought of as suggestions that might influence the decision-making of the stakeholders involved in AI development — that is, all of us. The OECD recognises that:
- AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future;
- AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges;
- And that, at the same time, these transformations may have disparate effects within and between societies and economies, notably regarding economic shifts, competition, transitions in the labour market, inequalities, and implications for democracy and human rights, privacy and data protection, and digital security;
- And that trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology [my emphasis], while limiting the risks associated with it;
The recommendations identify five complementary values-based principles for the responsible stewardship of trustworthy AI (while these principles are meant to be general, they’re clearly also appropriate in the more specific context of healthcare):
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
The OECD also provides five recommendations to governments:
- Facilitate public and private investment in research and development to spur innovation in trustworthy AI.
- Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
- Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
- Empower people with the skills for AI and support workers for a fair transition.
- Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.
For a more detailed description of the principles, as well as the background and plans for follow-up and monitoring processes, see the OECD Legal Instrument describing the recommendations.