Is it acceptable for today's algorithms, or an AGI a decade from now, to suggest withdrawing aggressive care and so hasten death? Or, alternatively, should they recommend persisting with futile care? The notion of "doing no harm" is stretched further still when an AI must choose between patient benefit and societal benefit. We thus need to develop broad principles to govern the design, creation, and use of AI in healthcare. These principles should encompass three domains: the technology itself, its users, and the way the two interact in the (socio-technical) health system.
The article goes on to list guiding principles for the development of AI in healthcare, including the following:
- AI must be designed and built to meet safety standards that ensure it is fit for purpose and operates as intended.
- AI must be designed for the needs of those who will work with it, and fit their workflows.
- Humans must have the right to challenge an AI’s decision if they believe it to be in error.
- Humans should not direct AIs to perform beyond the bounds of their design or delegated authority.
- Humans should recognize that their own performance is altered when working with AI.
- If humans are responsible for an outcome, they should be obliged to remain vigilant, even after they have delegated tasks to an AI.
The list above is only a brief summary. If you're interested in the topic of ethical decision making in clinical practice, you should read the whole article.