Artificial intelligence (AI) systems used in medicine are often highly reliable and accurate, but at the price of being increasingly opaque. This raises the question of whether a system's opacity undermines the ability of medical doctors to acquire knowledge on the basis of its outputs. We investigate this question by focusing on a case in which a patient's risk of breast cancer recurrence is predicted by an opaque AI system. We argue that, given the system's opacity, the possibility of malfunctioning AI systems, practitioners' inability to check the correctness of the systems' outputs, and the high stakes of such cases, the knowledge of medical practitioners is indeed undermined. They are lucky to form true beliefs based on the AI systems' outputs, and knowledge is incompatible with luck. We supplement this claim with a specific version of the safety condition on knowledge, Safety*. We argue that, relative to the perspective of the medical doctor in our example case, his relevant beliefs could easily be false, despite his evidence that the AI system functions reliably. Assuming that Safety* is necessary for knowledge, the practitioner therefore does not know. We address three objections to our proposal before turning to practical suggestions for improving the epistemic situation of medical doctors.