Abstract

One concern about the application of medical artificial intelligence (AI) regards its “black box” character: the system can be viewed only in terms of its inputs and outputs, with no way to understand how the algorithm arrives at its results. This is problematic because patients, physicians, and even designers do not understand why or how a treatment recommendation is produced by AI technologies. One view claims that the worry about black-box medicine is unreasonable because AI systems outperform human doctors in identifying diseases. Furthermore, under the medical AI-physician-patient model, the physician can undertake the responsibility of interpreting the medical AI's diagnosis. In this study, we focus on the potential harm caused by the unexplainability of medical AI and try to show that such harm is underestimated. We seek to contribute to the literature in three ways. First, we appeal to a thought experiment to show that although medical AI systems perform better on accuracy, the harm caused by their misdiagnoses may in some cases be more serious than the harm caused by human doctors' misdiagnoses. Second, in patient-centered medicine, physicians are obligated to provide adequate information to their patients in medical decision-making; the unexplainability of medical AI systems, however, limits patient autonomy. Last, we illustrate the psychological and financial burdens that the unexplainability of medical AI systems may impose, which previous ethical discussions appear to have overlooked.
