Abstract

With the increasing use of machine learning classifiers across many fields, providing human-understandable explanations of their outputs has become imperative. Such explanations are essential to build trust in day-to-day tasks, especially in sensitive domains such as medical imaging. Although many works have addressed this problem by generating visual explanation maps, they often produce noisy and inaccurate results, which forces the use of heuristic regularizations unrelated to the classifier in question. In this paper, we propose a general perspective on the visual explanation problem that overcomes these limitations. We show that a visual explanation can be produced as the difference between two generated images obtained via two specific conditional generative models. Both generative models are trained using the classifier to explain and a database to enforce the following properties: (i) all images generated by the first generator are classified similarly to the input image, whereas the second generator’s outputs are classified oppositely; (ii) all generated images belong to the distribution of real images; (iii) the distances between the input image and the corresponding generated images are minimal, so that the difference between the generated elements only reveals information relevant to the studied classifier. Using symmetrical and cyclic constraints, we present two different approximations and implementations of the general formulation. Experimentally, we demonstrate significant improvements with respect to the state of the art on three different public datasets. In particular, the localization of regions influencing the classifier is consistent with human annotations.
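To make the three properties above concrete, the following is a minimal sketch, not the authors' implementation, of how the per-batch objective and the resulting explanation map could be expressed in PyTorch. The network classes, the names `g_same`, `g_opp`, and `classifier`, the binary-classification setup, and the loss weights are all illustrative assumptions; the realism constraint (ii) would in practice require an adversarial discriminator, which is omitted here for brevity.

```python
# Hypothetical sketch of the objective described in the abstract (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in conditional generator: maps an image to an image of the same size."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def explanation_step(x, y, g_same, g_opp, classifier,
                     lam_cls: float = 1.0, lam_prox: float = 1.0):
    """Compute the loss terms for one batch and the visual explanation map.

    (i)   g_same(x) should keep the classifier's decision, g_opp(x) should flip it.
    (iii) both generated images should stay close to x, so that their difference
          highlights only classifier-relevant regions.
    Property (ii), realism of the generated images, is not modeled in this sketch.
    """
    x_same, x_opp = g_same(x), g_opp(x)

    # (i) classification constraints (binary classifier with a single logit assumed)
    loss_cls = F.binary_cross_entropy_with_logits(classifier(x_same), y) \
             + F.binary_cross_entropy_with_logits(classifier(x_opp), 1.0 - y)

    # (iii) proximity constraints keeping the generated images near the input
    loss_prox = (x_same - x).abs().mean() + (x_opp - x).abs().mean()

    # Visual explanation: difference between the two generated images
    explanation = (x_same - x_opp).abs()
    return lam_cls * loss_cls + lam_prox * loss_prox, explanation
```

In this reading, the explanation map is obtained purely from the two generators' outputs, so no post-hoc smoothing or heuristic regularization tied to the classifier's gradients is needed.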
