Abstract

Artificial intelligence (AI) is predicted to be a solution for improving healthcare, increasing efficiency, and saving time and resources. With the recent attention given to AI, several stakeholders have highlighted the lack of ethical principles guiding its use in practice. Research has shown an urgent need for more knowledge regarding the ethical implications of AI applications in healthcare. However, fundamental ethical principles may not be sufficient to describe the ethical concerns associated with implementing AI applications. The aim of this study is twofold: (1) to use the implementation of AI applications for predicting patient mortality in emergency departments as a setting in which to explore healthcare professionals' perspectives on ethical issues in relation to ethical principles, and (2) to develop a model, grounded in ethical theory, to guide ethical considerations in the implementation of AI in healthcare. Semi-structured interviews were conducted with 18 participants. The abductive approach used to analyze the empirical data consisted of four steps alternating between inductive and deductive analyses. Our findings yield an ethical model demonstrating the need to address six ethical principles (autonomy, beneficence, non-maleficence, justice, explicability, and professional governance) in relation to the ethical theories of virtue ethics, deontology, and consequentialism when AI applications are implemented in clinical practice. The ethical aspects of AI applications are broader than the prima facie principles of medical ethics and the principle of explicability; they must therefore be viewed from a broader perspective to cover the different situations that healthcare professionals in general, and physicians in particular, may face when using AI applications in clinical practice.
