ABSTRACT AI promises to address the quality and cost challenges facing health services, but errors and bias in medical device decisions pose threats to human health and life. These risks have also led to a lack of trust in AI medical devices among clinicians and patients. The goal of this article is to assess whether the AI explainability principle, established in numerous ethical AI frameworks, can help address these and other challenges posed by AI medical devices. We first define the AI explainability principle, delineate it from the AI transparency principle, and examine which stakeholders in the healthcare sector would need AI to be explainable and for what purpose. Second, we analyze whether explainable AI in healthcare is capable of achieving its intended goals. Finally, we examine a robust regulatory approval framework as an alternative, and more suitable, way of addressing the challenges posed by black-box AI.