Abstract

In the field of eXplainable AI (XAI), "black-box" algorithms such as Convolutional Neural Networks (CNNs) are known for high prediction performance. However, explaining and interpreting these algorithms still requires innovation in the understanding of influential and, more importantly, explainable features that directly or indirectly impact predictivity. A number of existing methods in the literature focus on visualization techniques, but the concepts of explainability and interpretability still lack rigorous definition. In view of these needs, this paper proposes an interaction-based methodology, the Influence score (I-score), to screen out noisy and non-informative variables in the images and thereby provide explainable and interpretable features that are directly associated with feature predictivity. Features selected with high I-score values can be regarded as a group of variables with an interactive effect, hence the name interaction-based methodology. We apply the proposed method to a real-world Pneumonia Chest X-ray image data set and produce state-of-the-art results. We also demonstrate how to apply the proposed approach to more general big data problems, improving explainability and interpretability without sacrificing prediction performance. The contribution of this paper opens a novel angle that moves the community closer to future pipelines for XAI problems. On the Pneumonia Chest X-ray image data, the proposed method achieves 99.7% Area Under the Curve (AUC) using fewer than 20,000 parameters, whereas peers such as VGG16 and its upgraded versions require at least millions of parameters to achieve on-par performance. Using I-score-selected explainable features reduces the parameter count by over 98% while delivering the same or better prediction results.
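The abstract refers to screening variables by their I-score. As a reference point, the following is a minimal sketch of one commonly cited form of the influence score for a discretized candidate feature subset, computed over the partition cells induced by the joint feature values. The function name, the toy data, and the unnormalized form of the score are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: influence score (I-score) for a candidate subset of
# discretized features X_sub (n samples x k features) against a binary label y.
import numpy as np

def i_score(X_sub, y):
    """One commonly cited (unnormalized) form of the influence score:
    partition the samples by their joint feature values, then
    I = sum_j n_j^2 * (ybar_j - ybar)^2 over the partition cells."""
    X_sub = np.asarray(X_sub)
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    # Each distinct row of X_sub defines one partition cell.
    _, cell_ids = np.unique(X_sub, axis=0, return_inverse=True)
    cell_ids = cell_ids.ravel()
    score = 0.0
    for cell in np.unique(cell_ids):
        in_cell = cell_ids == cell
        n_j = in_cell.sum()
        score += (n_j ** 2) * (y[in_cell].mean() - ybar) ** 2
    return score

# Toy usage: y depends on the interaction of the first two binary features.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 4))
y = np.logical_xor(X[:, 0], X[:, 1]).astype(float)
print(i_score(X[:, :2], y))   # interactive pair -> large score
print(i_score(X[:, 2:], y))   # pure noise pair  -> small score
```

In this toy setting, neither of the first two features is marginally informative, yet their joint partition separates the label perfectly, which is the kind of higher-order interaction the score is intended to surface.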

Highlights

  • Many successful achievements in machine learning and deep learning have accelerated real-world implementations of Artificial Intelligence (AI)

  • This paper presents an interaction-based feature selection methodology built on the Influence score (I-score) as the principal technique for detecting higher-order interactions in complex and large-scale data sets

  • Prior simulation studies [21, 22] demonstrate that the two basic tools, the I-score and the Backward Dropping Algorithm, can extract influential variables from a data set with respect to modules and interaction effects (a sketch of the Backward Dropping Algorithm follows this list)
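As referenced in the last highlight, the following is a hedged sketch of a backward-dropping search consistent with the usual description of the Backward Dropping Algorithm in the cited prior work: starting from a candidate subset, repeatedly drop the variable whose removal yields the largest I-score, and return the subset with the highest score observed along the way. The helper names and the greedy bookkeeping are assumptions for illustration, not the authors' exact code.

```python
# Hypothetical sketch of a backward-dropping search over a candidate subset.
# X is an (n x p) array of discretized features, y the response,
# candidate_idx the starting column indices, i_score_fn a scoring function
# such as the i_score sketch shown earlier.
def backward_dropping(X, y, candidate_idx, i_score_fn):
    current = list(candidate_idx)
    best_subset = list(current)
    best_score = i_score_fn(X[:, current], y)
    while len(current) > 1:
        # Try removing each remaining variable; keep the removal that
        # leaves the highest-scoring reduced subset.
        scores = [(i_score_fn(X[:, [v for v in current if v != drop]], y), drop)
                  for drop in current]
        score, drop = max(scores)
        current.remove(drop)
        if score > best_score:
            best_score, best_subset = score, list(current)
    return best_subset, best_score

# Usage with the i_score sketch above (hypothetical names):
# subset, score = backward_dropping(X, y, candidate_idx=[0, 1, 2, 3], i_score_fn=i_score)
```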


Summary

Introduction

Many successful achievements in machine learning and deep learning have accelerated real-world implementations of Artificial Intelligence (AI). The accompanying need for interpretability and explainability has been prominently acknowledged by the Department of Defense (DoD) [7]. In addressing these concepts, scholars and researchers have discussed a trade-off between learning performance (usually measured by prediction performance) and the effectiveness of explanations (known as explainability), as presented in Fig. 1 [18, 19]. This trade-off arises in any supervised machine learning problem that uses explanatory variables to predict a response variable.

Organization of this paper
Definition of feature explainability
Proposed method
Background of the Pneumonia disease
Biological interpretation of the image data
Pneumonia data set
Transfer learning using VGG16
Feature assessment and predictivity
Explainability and interpretation
Future scope
Findings
Conclusion