Artificial Intelligence (AI) now plays a full-fledged role in healthcare, driving innovations not only in diagnostics and treatment planning but also in patient monitoring and operational efficiency. AI models can analyze complex medical data, extracting patterns and insights that would elude human analysis. However, most of these models are inherently opaque, the so-called "black-box" problem, which raises serious challenges around transparency, trust, and ethical use in clinical settings. This lack of interpretability can impede the acceptance and integration of AI technologies wherever understanding and accountability matter. Explainable AI (XAI) addresses these problems by making AI decisions understandable to humans: XAI techniques provide interpretable explanations of model behavior with minimal degradation in performance. This review article examines in detail the critical role of XAI in healthcare, underpinning how the field can bring greater transparency to AI applications. We survey current XAI methods, including model-agnostic techniques such as LIME and SHAP, inherently interpretable models such as decision trees and linear models, and visualization techniques such as saliency maps and attention mechanisms.
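As a brief, hedged illustration of one of the model-agnostic techniques named above, the sketch below applies SHAP to a tree ensemble predicting a synthetic "risk score". The dataset, feature names, and model choice are illustrative assumptions, not material from the review itself.

```python
# Minimal sketch: SHAP feature attributions for a tree-based clinical model.
# All data, feature names, and the model choice are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical patient features: age, systolic blood pressure, glucose, BMI.
feature_names = ["age", "systolic_bp", "glucose", "bmi"]
X = rng.normal(size=(500, 4))
# Synthetic target loosely driven by glucose and BMI so attributions are non-trivial.
y = X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for one "patient"

# Each value is that feature's signed contribution to this prediction,
# relative to the model's average output (explainer.expected_value).
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

In a clinical workflow, per-feature attributions of this kind are what let a practitioner check whether a model's prediction rests on medically plausible factors rather than spurious correlations.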