Abstract

Interpretation of the reasoning behind a deep learning model's prediction is always desirable. However, when the predictions of a deep learning model directly affect people's lives, interpretation becomes a necessity. In this paper, we introduce a deep learning model: the negative-positive prototypical part network (NP-ProtoPNet). This model attempts to imitate human reasoning for image recognition by comparing the parts of a test image with the corresponding parts of images from known classes. We demonstrate our model on a dataset of chest X-ray images of Covid-19 patients, pneumonia patients and normal people. The accuracy and precision that our model achieves are on par with the best performing non-interpretable deep learning models.

Highlights

  • The importance of deep learning algorithms stems from the fact that they are capable of solving many social and economic problems

  • NP-ProtoPNet is closely related to the prototypical part network (ProtoPNet) [3] model, but it is strikingly different from ProtoPNet

  • Our model considers both positive and negative reasoning to classify images, whereas ProtoPNet relies only on positive reasoning



Introduction

The importance of deep learning algorithms stems from the fact that they are capable of solving many social and economic problems. However, most deep learning algorithms work as black boxes, because they lack transparency in the reasoning process behind their predictions. This lack of interpretability has become a key issue for whether we can trust the predictions coming from these models. We introduce an interpretable deep learning model: the negative-positive prototypical part network (NP-ProtoPNet). NP-ProtoPNet is closely related to the prototypical part network (ProtoPNet) [3] model, but it is strikingly different from ProtoPNet: our model considers both positive and negative reasoning to classify images, whereas ProtoPNet relies only on positive reasoning.
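
To make the positive/negative reasoning concrete, the following is a minimal PyTorch sketch of a ProtoPNet-style prototype layer in which the class-connection weights are allowed to be negative, so strong similarity to a prototype of another class lowers (rather than merely fails to raise) a class score. The layer name, tensor shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class NegativePositivePrototypeLayer(nn.Module):
    """Sketch of a ProtoPNet-style prototype layer whose class-connection
    weights may be positive or negative, so a prototype can count as
    evidence for one class and against another."""

    def __init__(self, num_prototypes, proto_channels, num_classes):
        super().__init__()
        # Learnable prototype vectors, compared against patches of the
        # convolutional feature map produced by a CNN backbone.
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, proto_channels))
        # Class-connection weights: positive entries add evidence for a
        # class, negative entries subtract evidence (negative reasoning).
        self.class_connections = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, conv_features):
        # conv_features: (batch, proto_channels, H, W)
        patches = conv_features.flatten(2).transpose(1, 2)    # (B, H*W, C)
        protos = self.prototypes                               # (P, C)
        # Squared L2 distance between every patch and every prototype.
        sq_dists = (patches.pow(2).sum(-1, keepdim=True)
                    + protos.pow(2).sum(-1)
                    - 2 * patches @ protos.t()).clamp(min=0)   # (B, H*W, P)
        # Keep the closest patch per prototype (global min-pooling).
        min_dists = sq_dists.min(dim=1).values                 # (B, P)
        # Map distances to bounded similarity scores, as in ProtoPNet.
        similarities = torch.log((min_dists + 1) / (min_dists + 1e-4))
        # Combine positive and negative evidence into class logits.
        return self.class_connections(similarities)


# Example: 3 classes (Covid-19, pneumonia, normal), 30 prototypes of 128 channels.
layer = NegativePositivePrototypeLayer(num_prototypes=30, proto_channels=128, num_classes=3)
logits = layer(torch.randn(2, 128, 7, 7))   # stand-in for backbone features
print(logits.shape)                          # torch.Size([2, 3])
```

In this sketch, a chest X-ray that closely matches a pneumonia prototype both raises the pneumonia logit and, through a negative class-connection weight, lowers the Covid-19 and normal logits.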
