Abstract

Deep learning models typically improve performance at the cost of increasing complexity, turning these systems into "black box" approaches and obscuring how they function and, ultimately, how they reach their decisions. Layerwise Relevance Propagation (LRP), an Explainable AI (XAI) technique, explains a neural network's output in terms of its input by propagating relevance iteratively from the output class neuron back to the original input neurons. This paper outlines recent advancements in the area and advocates for greater interpretability in AI. We study LRP in depth, together with four of its rule variants, which we evaluate on different datasets. With the aim of explaining the predictions of deep learning models for image classification, we examine how these LRP rules identify which pixels of an input image are important for a classification decision by computing a relevance score for each pixel. Furthermore, we conducted a user questionnaire survey on XAI to compare, by human visual inspection, the quality of the heatmaps generated by the different LRP methods.
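As a rough illustration of the relevance-propagation step described above, the sketch below implements the widely used LRP-epsilon rule for a single fully connected layer in NumPy. It is a minimal sketch only: the function name, signature, and the epsilon stabiliser value are illustrative assumptions and are not taken from the paper; a full implementation would apply such a step layer by layer from the output back to the input pixels.

    import numpy as np

    def lrp_epsilon(activations, weights, relevance_out, eps=1e-6):
        """One backward LRP-epsilon step for a dense layer.

        activations   : (n_in,)        layer-input activations a_j
        weights       : (n_in, n_out)  weight matrix w_jk
        relevance_out : (n_out,)       relevance R_k from the layer above
        Returns the relevance R_j redistributed onto the layer inputs.
        """
        z = activations @ weights          # z_k = sum_j a_j * w_jk
        z = z + eps * np.sign(z)           # epsilon stabiliser avoids division by ~0
        s = relevance_out / z              # s_k = R_k / z_k
        return activations * (weights @ s) # R_j = a_j * sum_k w_jk * s_k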
