Abstract

The field of explainable artificial intelligence (XAI) aims to build explainable and interpretable machine learning (or deep learning) methods without sacrificing prediction performance. Convolutional neural networks (CNNs) have been successful in making predictions, especially in image classification. These popular and well-documented successes use extremely deep CNNs such as VGG16, DenseNet121, and Xception. However, these well-known deep learning models use tens of millions of parameters based on a large number of pretrained filters that have been repurposed from previous data sets. Among these identified filters, a large portion contains no information yet remains as input features. Thus far, there is no effective method to omit these noisy features from a data set, and their existence negatively impacts prediction performance. In this paper, a novel interaction-based convolutional neural network (ICNN) is introduced that does not make assumptions about the relevance of local information. Instead, a model-free influence score (I-score) is proposed to directly extract the influential information from images to form important variable modules. This technique replaces all pretrained filters found by trial and error with explainable, influential, and predictive variable sets (modules) determined by the I-score. In other words, future researchers need not rely on pretrained filters; the proposed algorithm identifies only the variables or pixels with high I-score values, which are the most predictive and important. The proposed method and algorithm were tested on a real-world data set, and a state-of-the-art prediction performance of 99.8% was achieved without sacrificing the explanatory power of the model. This proposed design can efficiently screen patients infected by COVID-19 before human diagnosis and can be a benchmark for addressing future XAI problems in large-scale data sets.
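
The abstract does not spell out the I-score itself; a commonly cited form of the partition-based influence score from Lo and Zheng's earlier work (the exact normalization used in this paper may differ) treats a candidate module of discretized pixels as partitioning the n training images into m cells and computes

    I = \frac{1}{n\,\hat{\sigma}_Y^{2}} \sum_{j=1}^{m} n_j^{2} \left( \bar{Y}_j - \bar{Y} \right)^{2},

where n_j is the number of images falling in cell j, \bar{Y}_j is the mean label within that cell, \bar{Y} is the overall mean label, and \hat{\sigma}_Y^{2} is the sample variance of the labels. Larger values indicate that the module's joint pixel pattern is more predictive of the label.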

Highlights

  • Instead of applying a set of predefined filters, found by trial and error, to various kinds of deep convolutional neural networks (CNNs) on image data, we propose using the influence score (I-score) and the backward dropping algorithm on a local window to extract important and influential features from image data (a sketch of this procedure follows this list)

  • The proposed interaction-based convolutional neural network (ICNN) relies on the I-score, a measure that satisfies the three criteria of an explainable and interpretable measure

  • This paper focuses on using the area under the curve (AUC) as the main evaluation metric
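
As a rough illustration of the first highlight, the sketch below computes a partition-based I-score for a set of discretized pixels and runs a greedy backward dropping pass over a 3 x 3 local window. The function names, the binarization of pixels, and the normalization by n * Var(y) are illustrative assumptions rather than the paper's exact implementation.

    import numpy as np

    def i_score(X, y):
        # Partition samples by their joint (discretized) pixel pattern, then
        # aggregate squared deviations of cell means from the overall mean,
        # weighted by squared cell sizes and normalized by n * Var(y).
        y = np.asarray(y, dtype=float)
        n, y_bar, var_y = len(y), y.mean(), y.var()
        if var_y == 0:
            return 0.0
        _, cells = np.unique(X, axis=0, return_inverse=True)
        score = sum(
            (cells == j).sum() ** 2 * (y[cells == j].mean() - y_bar) ** 2
            for j in np.unique(cells)
        )
        return score / (n * var_y)

    def backward_dropping(X, y, candidates):
        # Greedily drop the variable whose removal yields the highest I-score;
        # return the subset (and score) with the best score seen along the way.
        current = list(candidates)
        best_set, best = list(current), i_score(X[:, current], y)
        while len(current) > 1:
            score, drop = max(
                (i_score(X[:, [v for v in current if v != d]], y), d)
                for d in current
            )
            current.remove(drop)
            if score > best:
                best, best_set = score, list(current)
        return best_set, best

    # Toy usage: a 3x3 local window of binarized pixels, with labels driven by
    # an interaction between two of them.
    rng = np.random.default_rng(0)
    X = (rng.random((500, 9)) > 0.5).astype(int)
    y = X[:, 0] ^ X[:, 3]
    module, score = backward_dropping(X, y, range(9))
    print(module, round(score, 3))

In the full ICNN, this scan would be repeated over overlapping local windows of the image, and the surviving high-I-score modules would feed the network in place of pretrained filters, as described in the abstract.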

Summary

Introduction

As of 25 April 2021, there have been 146,054,107 confirmed cases of COVID-19, including 3,092,410 deaths, reported to the World Health Organization (WHO). The pandemic has been a top concern and roadblock since 2019 in more than 150 countries, with extreme impacts on health and lives on a global scale. Prominent signs of viral pneumonia include decreased oxygen saturation and blood gas deviations, as well as changes visible in the lungs through chest X-rays and other imaging techniques [1]. In the fight to overcome COVID-19, it is essential to quickly detect infected patients.
