Abstract

The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity: the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify the implementation of these AI systems in health care, or is it merely one necessary criterion among others? I argue that accountability, which plays an important role in preserving the patient-physician trust that allows the institution of medicine to function, contributes further to an account of AI system justification. Hence, I endorse the vanishing accountability principle: accountability in medicine, in addition to statistical validation, must be preserved. AI systems that introduce problematic gaps in accountability should not be implemented.
