Abstract

As deep learning has been successfully deployed in diverse applications, there is an ever-increasing need to explain its decisions. Case-based reasoning has proved effective for this purpose in many areas. Prototype-based explanation is a method that explains a model's prediction using the distances between an input and learned prototypes, thereby performing case-based reasoning. However, existing methods are less reliable because these distances are not always consistent with human perception. In this study, we construct a latent space, which we call an explanation space, using distributional embedding and latent space regularization. The explanation space ensures that images that are similar in terms of human-interpretable features share similar latent representations, and therefore yields distance-based explanations that are consistent with human perception. The explanation space also provides additional explanation by transition, allowing the user to understand the factors that affect the distance. Through extensive experiments, including human evaluation, we show that the explanation space provides a more human-understandable explanation.
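To make the prototype-based setting concrete, the sketch below (not the paper's implementation; all names and dimensions are illustrative assumptions) shows the core operation the abstract refers to: embedding an input into a latent space and explaining a prediction by its distances to a set of learned prototype vectors.

```python
# Minimal sketch of prototype-based explanation: a prediction is explained by
# the nearest learned prototypes to the input's latent embedding.
# This is an illustrative example, not the authors' method.
import torch


def explain_by_prototypes(z, prototypes, k=3):
    """Return indices and distances of the k prototypes closest to embedding z.

    z          -- latent embedding of the input, shape (d,)
    prototypes -- learned prototype embeddings, shape (num_prototypes, d)
    """
    # Euclidean distance from the input embedding to every prototype.
    dists = torch.linalg.vector_norm(prototypes - z, dim=1)
    nearest = torch.topk(dists, k, largest=False)
    return nearest.indices, nearest.values


# Hypothetical usage: z would normally come from an encoder, e.g. encoder(image).
latent_dim, num_prototypes = 128, 10
z = torch.randn(latent_dim)                      # stand-in for an encoded image
prototypes = torch.randn(num_prototypes, latent_dim)
idx, dist = explain_by_prototypes(z, prototypes)
print(idx, dist)
```

The paper's contribution, as summarized above, is to shape this latent space so that such distances track human-perceived similarity.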
