Abstract

Artificial intelligence (AI) is increasingly used in health care to improve diagnostics and treatment. Decision-making tools intended to support professionals in diagnostic processes are being developed across a variety of medical fields. Despite the imagined benefits, AI in health care is contested: scholars point to ethical and social issues related to the development, implementation, and use of AI in diagnostics. Here, we investigate how three relevant groups construct ethical challenges with AI decision-making tools in prostate cancer (PCa) diagnostics: scientists developing AI decision support tools for interpreting MRI scans for PCa, medical doctors working with PCa, and PCa patients. This qualitative study is based on participant observation and interviews with the abovementioned actors. The analysis focuses on how each group draws on its understanding of 'good health care' when discussing ethical challenges, and how they mobilise different registers of valuing in this process. Our theoretical approach is inspired by scholarship on evaluation and justification. We demonstrate how ethical challenges in this area are conceptualised, weighted, and negotiated among these participants as processes of valuing good health care, and we compare their perspectives.
