Abstract

The ability to accurately locate all indicators of disease within medical images is vital for understanding the effects of the disease, as well as for weakly supervised segmentation and localization of the diagnostic correlates of disease. Existing methods either use classifiers to make predictions based on class-salient regions or use adversarial-learning-based image-to-image translation to capture such disease effects. However, the former does not capture all features relevant for visual attribution (VA) and is prone to data biases, while the latter can produce adversarial (misleading) and inefficient solutions when operating directly on pixel values. To address these issues, we propose a novel approach, Visual Attribution using Adversarial Latent Transformations (VA2LT). Our method uses adversarial learning to generate counterfactual (CF) normal images from abnormal images by finding and modifying discrepancies in the latent space. We use cycle consistency between the query and CF latent representations to guide training. We evaluate our method on three datasets: a synthetic dataset, the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the BraTS dataset. Our method outperforms baseline and related methods on all three datasets.
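
To make the latent cycle-consistency idea from the abstract concrete, the following is a minimal PyTorch sketch of such a penalty. The module names (`encoder`, `to_normal`, `to_abnormal`) and the L1 formulation are illustrative assumptions, not the paper's actual architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentCycleLoss(nn.Module):
    """Hedged sketch: cycle-consistency term between the query latent and the
    counterfactual (CF) latent, assuming a shared encoder and a pair of
    hypothetical latent transformations (abnormal -> normal and back)."""

    def __init__(self, encoder: nn.Module, to_normal: nn.Module, to_abnormal: nn.Module):
        super().__init__()
        self.encoder = encoder          # maps an image to its latent representation
        self.to_normal = to_normal      # latent transformation: abnormal latent -> CF normal latent
        self.to_abnormal = to_abnormal  # inverse latent transformation used to close the cycle

    def forward(self, x_abnormal: torch.Tensor) -> torch.Tensor:
        z_query = self.encoder(x_abnormal)   # latent of the abnormal (query) image
        z_cf = self.to_normal(z_query)       # adversarially transformed counterfactual latent
        z_back = self.to_abnormal(z_cf)      # map the CF latent back toward the query domain
        return F.l1_loss(z_back, z_query)    # penalize deviation from the original query latent
```

In practice such a term would be added to the adversarial objective so that the CF latent stays tied to the query, but the weighting and exact formulation here are assumptions rather than the published method.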
