Abstract

Deep learning (DL) models are among the most popular and widely adopted machine learning techniques across various applications, owing to their capability to handle extensive datasets and extract complex features. However, despite their widespread use, concerns remain regarding the interpretability and explainability of DL models, especially in applications such as healthcare, where trust is paramount. To address this issue, we propose a linguistic rule-based explainable artificial intelligence (LR-XAI) model designed to elucidate the decision-making process of DL models in terms of rules. The LR-XAI model employs several local-level interpretation methods to score the features learned during training, and a linguistic rule-based system then generates rules from these interpretation scores. The proposed model aims to enhance both interpretability, in terms of feature attribution scores, and explainability, in terms of IF-THEN rules. We evaluated the proposed model on three diverse datasets related to physics experiments, as well as custom datasets. Furthermore, we conducted a statistical analysis to rigorously compare the rules generated by the different interpretation methods, a step that is essential for an in-depth assessment of rule quality. The results of these evaluations provide valuable insights into the effectiveness and applicability of the proposed model in addressing the interpretability and explainability challenges associated with DL models.
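As a rough illustration of the pipeline the abstract describes, the sketch below derives per-feature attribution scores for a single instance with a simple occlusion-based local interpretation method, maps each influential feature value to a coarse linguistic term, and emits an IF-THEN rule. This is a hypothetical sketch, not the authors' implementation: the function names (`occlusion_attributions`, `linguistic_term`, `build_rule`), the tercile binning, and the stand-in model are all illustrative assumptions, since the paper's actual interpretation methods and rule grammar are not given here.

```python
# Hypothetical LR-XAI-style sketch:
# 1) score features with a local interpretation method (occlusion here),
# 2) translate feature values into linguistic terms,
# 3) assemble an IF-THEN rule from the most influential features.
import numpy as np

def occlusion_attributions(predict, x, baseline=0.0):
    """Occlusion attribution: change in the model output when each
    feature is replaced by a baseline value, one at a time."""
    base_pred = predict(x)
    scores = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        x_occluded = x.copy()
        x_occluded[i] = baseline
        scores[i] = base_pred - predict(x_occluded)
    return scores

def linguistic_term(value, low, high):
    """Map a feature value to a coarse linguistic label via tercile bins."""
    t = (value - low) / (high - low + 1e-12)
    return "LOW" if t < 1 / 3 else ("MEDIUM" if t < 2 / 3 else "HIGH")

def build_rule(x, scores, feature_names, bounds, label, top_k=3):
    """Form an IF-THEN rule from the top-k features by |attribution|."""
    top = np.argsort(-np.abs(scores))[:top_k]
    antecedents = [
        f"{feature_names[i]} is {linguistic_term(x[i], *bounds[i])}" for i in top
    ]
    return "IF " + " AND ".join(antecedents) + f" THEN class = {label}"

if __name__ == "__main__":
    # Stand-in for a trained DL model: any callable returning a scalar works.
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    predict = lambda x: float(1 / (1 + np.exp(-w @ x)))

    names = ["f1", "f2", "f3", "f4"]
    bounds = [(0.0, 1.0)] * 4   # per-feature (min, max) from training data
    x = rng.uniform(size=4)     # instance to explain

    scores = occlusion_attributions(predict, x)
    label = int(predict(x) > 0.5)
    print(build_rule(x, scores, names, bounds, label))
```

Under this reading, swapping in a different local interpretation method (e.g., LIME or SHAP) only changes how `scores` is computed; the rule-generation step stays the same, which is what makes the generated rules comparable across methods.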
