Abstract
• A visual attribution (VA) method is proposed for medical images.
• VA is posed as discrepancy maps between an abnormal image and its normal counterpart.
• Generative adversarial networks are used for abnormal-to-normal image generation.
• Experiments are conducted on synthetic and Alzheimer's disease datasets.
• The proposed method outperformed baseline and related methods.

Visual attribution (VA) for medical images is an essential aspect of modern automation-assisted diagnosis. Since it is generally not straightforward to obtain pixel-level ground-truth labelling of medical images, classification-based interpretation approaches have become the de facto standard for automated diagnosis, in which the ability of classifiers to make categorical predictions based on class-salient regions is harnessed within the learning algorithm. Such regions, however, typically constitute only a small subset of the full range of features of potential medical interest. They may hence not be useful for VA of medical images, where capturing all of the disease evidence is a critical requirement. This motivates the proposal of a novel strategy for visual attribution that is not reliant on image classification. We instead obtain normal counterparts of abnormal images and find discrepancy maps between the two. To perform the abnormal-to-normal mapping in an unsupervised way, we employ a cycle-consistency generative adversarial network (CycleGAN), thereby formulating visual attribution in terms of a discrepancy map that, when subtracted from the abnormal image, makes it indistinguishable from the counterpart normal image. Experiments are performed on three datasets: a synthetic dataset, the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the BraTS dataset. We outperform baseline and related methods in these experiments.
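The core formulation above can be illustrated with a minimal sketch: given a trained abnormal-to-normal generator G, the attribution map is simply the residual between the abnormal image and its generated normal counterpart. The generator below is a toy stand-in (a hypothetical thresholding function, not the paper's trained CycleGAN) used only to show the arithmetic.

```python
import numpy as np

def discrepancy_map(abnormal, generator):
    """Visual attribution as the residual between an abnormal image
    and its generated normal counterpart: map = abnormal - G(abnormal)."""
    normal_counterpart = generator(abnormal)
    return abnormal - normal_counterpart

def toy_generator(img):
    # Hypothetical stand-in for a trained abnormal-to-normal generator:
    # it "heals" the image by suppressing high-intensity lesion pixels
    # down to the background level.
    out = img.copy()
    out[out > 0.8] = 0.2
    return out

# Synthetic abnormal image: uniform background with a bright lesion patch.
img = np.full((8, 8), 0.2)
img[2:4, 2:4] = 1.0

va = discrepancy_map(img, toy_generator)
# The attribution map is nonzero only over the lesion region.
```

With a real trained generator, the same subtraction yields a map that highlights all disease evidence removed by the abnormal-to-normal translation, rather than only the classifier-salient regions.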