Abstract

Responsible AI is concerned with the design, implementation, and use of ethical, transparent, and accountable AI technology in order to reduce bias, promote fairness and equality, and facilitate the interpretability and explainability of outcomes — concerns that are particularly pertinent in a healthcare context. However, the extant literature on health AI reveals significant issues in each area of responsible AI, with moral and ethical consequences. This is especially concerning in health, where lives are at stake and where sensitivities arise that are less pertinent in other domains. This calls for a comprehensive analysis of health AI using responsible AI concepts as a structural lens. A systematic literature review supported our data collection and sampling procedure; the corresponding analysis and extraction of research themes provided an evidence-based foundation. We contribute a systematic description and explanation of the intellectual structure of Responsible AI in digital health and develop an agenda for future research.

Highlights

  • Responsible Artificial Intelligence (AI) is an emerging area that investigates the ethics of AI to understand the moral responsibility in emerging technology (Tigard, 2020)

  • We focus on the ethical concerns emerging from AI in digital health based on the six types developed by Mittelstadt et al. (2016), which contribute to developing responsible AI for healthcare (Dignum, 2019)

  • We investigate ethical concerns emerging from AI in healthcare because they are a key element of Responsible AI, which is concerned with the proper use of information exchanged across healthcare organizations


Introduction

Responsible Artificial Intelligence (AI) is an emerging area that investigates the ethics of AI to understand the moral responsibility in emerging technology (Tigard, 2020). The need for responsible AI has stemmed from a limited understanding of important issues that emerge with the use of such technologies. These concerns influence the use of AI in healthcare, which integrates and learns from large datasets of clinical data, to support diagnosis, clinical decision making, and personalized medicine. The use of AI often results in decisions and actions that have moral consequences, undermine ethical principles, and diminish people’s rights and dignity (Martin, 2019b). Recent empirical articles highlight how deploying AI is coupled with significant ethical challenges (Floridi & Taddeo, 2016), as the “walking data generators” (individuals/patients) are often unaware of how their medical


