Abstract

It is widely accepted that explainability is a requirement for the ethical use of artificial intelligence (AI) in healthcare. I challenge this Explainability Imperative (EI) by considering the following question: does the use of epistemically opaque medical AI systems violate existing legal standards for informed consent? If it does, and if the failure to meet those standards can be attributed to epistemic opacity, then explainability is a requirement for AI in healthcare. If it does not, then by at least one metric of ethical medical practice (informed consent), explainability is not required for the ethical use of AI in healthcare. First, I show that the use of epistemically opaque AI applications is compatible with meeting accepted legal criteria for informed consent. Second, I argue that human experts are also black boxes with respect to the criteria by which they arrive at a diagnosis, yet human experts can nonetheless meet established requirements for informed consent. I conclude that the use of black-box AI systems does not violate patients’ rights to informed consent, and thus, with respect to informed consent, explainability is not required for medical AI.
