Lung cancer is a leading cause of cancer death worldwide. The survival rate is generally higher when this disease is detected in its early stages. Advances in artificial intelligence (AI) have enabled the development of decision support systems that help physicians diagnose diseases. However, these systems often provide final predictions without clarifying how those decisions are reached, raising concerns about trust and adoption in life-threatening diseases. To address these issues, this study proposes an explainable case-based reasoning (XCBR) approach that considers both physicians' tendency to base their decisions on past cases and the case complexity in its predictions and explanations. The proposed XCBR is enhanced with naïve Bayes (NB) and multilayer perceptron (MLP) classifiers, which are applied hierarchically: when the NB deems its predictions to be unlikely, the MLP classifier is employed to verify or update them. The approach incorporates Shapley additive explanations (SHAP) values to elucidate the solutions offered by the MLP. Furthermore, it utilizes the Harris hawks optimization algorithm for feature selection and feature weighting. The proposed XCBR achieved high accuracies of 94.47% and 100% on two different datasets, demonstrating its generalization capability. Based on the Wilcoxon signed-rank test, its classification accuracy is comparable to that of other state-of-the-art approaches and commonly used classifiers. Moreover, since this approach prioritizes case complexity in its predictions and explanations, it offers better explainability and is particularly suited for serious diseases.
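The hierarchical NB→MLP step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the confidence threshold, the synthetic data, and the use of scikit-learn's `GaussianNB` and `MLPClassifier` are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of the hierarchical classification step: use the naive
# Bayes prediction when NB is confident, and defer to the MLP when NB deems
# its own prediction unlikely. The 0.8 threshold is an assumed value.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def hierarchical_predict(nb, mlp, X, threshold=0.8):
    """Return NB predictions, except where NB's top class probability
    falls below `threshold`; those samples are re-predicted by the MLP."""
    nb_proba = nb.predict_proba(X)
    preds = nb.classes_[nb_proba.argmax(axis=1)]
    uncertain = nb_proba.max(axis=1) < threshold
    if uncertain.any():
        # MLP verifies or updates the low-confidence NB predictions.
        preds[uncertain] = mlp.predict(X[uncertain])
    return preds

# Synthetic stand-in for a lung cancer dataset (illustration only).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

preds = hierarchical_predict(nb, mlp, X_te)
```

SHAP values would then be computed only for the samples routed to the MLP, to explain why the network confirmed or overturned the NB prediction.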