Abstract

The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of moral accountability and safety assurance worldwide have not yet adjusted. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems, and less knowledge and understanding of precisely how those systems reach their decisions. We illustrate this analysis by applying it to an example of an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for ways forward to mitigate these concerns. We argue for a need to include artificial intelligence developers and systems safety engineers in our assessments of moral accountability for patient harm. At the same time, none of these actors robustly fulfils the traditional conditions of moral accountability for the decisions of an artificial intelligence system, so we should update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety are not fully resolvable during the design of the artificial intelligence system, before the system has been deployed.

Highlights

  • Recent research has demonstrated the potential to create artificial intelligence-based health-care applications that can reach or exceed the performance of clinicians for specific tasks.[1]

  • The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of moral accountability and safety assurance worldwide have not yet adjusted.

  • In this paper we focus on two implications of clinical decision-making that involves artificial intelligence: moral accountability for harm to patients; and safety assurance to protect patients against such harm.

Introduction

Recent research has demonstrated the potential to create artificial intelligence-based health-care applications that can reach or exceed the performance of clinicians for specific tasks.[1] The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of moral accountability and safety assurance worldwide have not yet adjusted. Our central thesis is that digital tools are challenging the standard clinical practices of assigning blame, as well as of assuring safety. We use an example from an artificial intelligence-based clinical system developed for use in the treatment of sepsis. We discuss this system's perceived and actual benefits and harms, and consider the moral accountability and safety assurance issues that arise from the perspective of both clinicians and patients. We conclude with practical suggestions for dealing with moral accountability and safety assurance in the use of artificial intelligence in health care.

Moral accountability
Safety assurance
Example system
Potential benefits and harms
Clinician and patient perspectives
Findings
The way forward