Abstract

Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI (AI HLEG) set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with the EC's efforts, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the AI HLEG's general trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity of thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use Z-Inspection®, a process for assessing trustworthy AI, to identify specific challenges and potential ethical trade-offs that arise when AI is considered in practice.

Highlights

  • According to a recent literature review (Bærøe et al., 2020), Artificial Intelligence (AI) in healthcare is already being used: 1) in the assessment of the risk of disease onset and in estimating treatment success; 2) in an attempt to manage or alleviate complications; 3) to assist with patient care during the active treatment or procedure phase; 4) in research aimed at elucidating the pathology or mechanism of, and/or the ideal treatment for, a disease. For all of its potential, the use of AI in healthcare brings major risks and potential unintended harm

  • The authors concluded that “these findings suggest that while a machine learning model recognized a significantly greater number of out-of-hospital cardiac arrests than dispatchers alone, this did not translate into improved cardiac arrest recognition by dispatchers” (Blomberg et al., 2021)

  • There is a tension between the conclusions of the retrospective study (Blomberg et al., 2019), which indicated that the machine learning (ML) framework performed better than emergency medical dispatchers at identifying out-of-hospital cardiac arrest (OHCA) in emergency phone calls and raised the expectation that ML could play an important role as a decision-support tool for dispatchers, and the results of a randomized controlled trial performed later (September 2018–January 2020) (Blomberg et al., 2021), which did not show any benefit from using the AI system in practice; the sketch below illustrates the difference between the two kinds of evaluation
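As a minimal illustration of why these two evaluations can diverge, the following Python sketch uses purely hypothetical counts (not the studies' data). It contrasts retrospective recognition sensitivity, where the model and the dispatchers are scored against the same set of recorded calls, with a randomized-trial outcome, where the question is whether dispatchers who receive the model's alerts recognize more OHCAs than those who do not.

```python
# Minimal sketch with purely hypothetical numbers (not the studies' data).
# Retrospective evaluation: model and dispatchers are scored on the same
# set of recorded calls with confirmed out-of-hospital cardiac arrest (OHCA).

def sensitivity(recognized: int, total_ohca: int) -> float:
    """Fraction of confirmed OHCA calls that were recognized."""
    return recognized / total_ohca

TOTAL_OHCA = 1000            # hypothetical number of confirmed OHCA calls
dispatcher_recognized = 730  # hypothetical: recognized by dispatchers alone
model_recognized = 840       # hypothetical: flagged by the ML model

print(f"Dispatchers (retrospective): {sensitivity(dispatcher_recognized, TOTAL_OHCA):.2f}")
print(f"ML model    (retrospective): {sensitivity(model_recognized, TOTAL_OHCA):.2f}")

# Randomized trial: the outcome measured is different. The question is
# whether dispatchers who receive the model's alerts recognize more OHCAs
# than dispatchers who do not. If alerts are ignored, arrive too late, or
# are diluted by false positives, a retrospective gap like the one above
# need not translate into improved live recognition.
alerted_recognized, alerted_ohca = 410, 500   # hypothetical trial arm
control_recognized, control_ohca = 405, 500   # hypothetical control arm

print(f"Dispatchers with alerts:    {sensitivity(alerted_recognized, alerted_ohca):.2f}")
print(f"Dispatchers without alerts: {sensitivity(control_recognized, control_ohca):.2f}")
```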


Summary

Introduction

According to a recent literature review (Bærøe et al., 2020), Artificial Intelligence (AI) in healthcare is already being used: 1) in the assessment of the risk of disease onset and in estimating treatment success (before initiation); 2) in an attempt to manage or alleviate complications; 3) to assist with patient care during the active treatment or procedure phase; and 4) in research aimed at elucidating the pathology or mechanism of, and/or the ideal treatment for, a disease. For all of its potential, the use of AI in healthcare brings major risks and potential unintended harm. Although AI is beginning to be used in healthcare, approved and validated products are still scarce. Given that “the artificial intelligence industry is driven by strong economic and political interests,” the need for trustworthy adoption of AI in healthcare is crucial (Bærøe et al., 2020). AI has the potential to “greatly improve the delivery of healthcare and other services that advance well-being, if it is validated by the authorities, accepted and supported by the Healthcare Professionals and Healthcare Organizations and trusted by patients” (MedTech Europe, 2019; Deloitte, 2020).

