Abstract

Recently, due to the black-box nature of deep learning techniques, deep network-based computer-aided diagnosis (CADx) systems have encountered many difficulties in practical applications. The crux of the problem is that these models should be explainable: the model should give doctors rationales that can explain the diagnosis. In this paper, we propose a visually interpretable network (VINet) that generates diagnostic visual interpretations while making accurate diagnoses. VINet is an end-to-end model consisting of an importance estimation network and a classification network. The former produces a diagnostic visual interpretation for each case, and the latter diagnoses the case. In the classifier, by exploiting the information in the diagnostic visual interpretation, irrelevant information in the feature maps is eliminated by our proposed feature destruction process. This allows the classification network to concentrate on the important features and use them as the primary references for classification. By jointly optimizing for higher classification accuracy and for the elimination of as many irrelevant features as possible, the proposed network produces a precise, fine-grained diagnostic visual interpretation together with an accurate diagnosis. Extensive experiments on a computed tomography image dataset of pulmonary nodules (LUNA16) demonstrate that VINet produces state-of-the-art diagnostic visual interpretations compared with all baseline methods.
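As a rough illustration of the architecture outlined above, the following PyTorch sketch shows how an importance estimator, a masking-based feature destruction step, and a joint classification-plus-sparsity loss might fit together. This is a minimal sketch under stated assumptions, not the authors' implementation: the module definitions, the interpolation-and-multiplication form of feature destruction, and the `lambda_sparsity` weight are all illustrative choices.

```python
# Minimal sketch of the VINet idea from the abstract. Assumptions: the
# estimator/classifier layers, the masking form of "feature destruction",
# and the L1 sparsity term are illustrative, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImportanceEstimator(nn.Module):
    """Predicts a per-pixel importance map in [0, 1] for an input CT slice."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # importance map, same H x W as input

class Classifier(nn.Module):
    """Diagnoses from feature maps after irrelevant features are suppressed."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x, importance):
        f = self.features(x)
        # "Feature destruction": resize the importance map to the feature
        # resolution and zero out features the estimator deems irrelevant.
        m = F.interpolate(importance, size=f.shape[-2:], mode="bilinear",
                          align_corners=False)
        f = f * m
        f = f.mean(dim=(2, 3))  # global average pooling
        return self.head(f)

def vinet_loss(logits, labels, importance, lambda_sparsity=0.1):
    # Joint objective: accurate diagnosis plus elimination of irrelevant
    # features (an L1 penalty keeps the importance map sparse).
    return F.cross_entropy(logits, labels) + lambda_sparsity * importance.mean()

# Usage on a dummy batch of single-channel CT patches:
est, clf = ImportanceEstimator(), Classifier()
x, y = torch.randn(4, 1, 64, 64), torch.randint(0, 2, (4,))
imp = est(x)
logits = clf(x, imp)
loss = vinet_loss(logits, y, imp)
loss.backward()
```

The sparsity penalty stands in for the abstract's "eliminating as many irrelevant features as possible": without it, the estimator could trivially mark every pixel as important, so penalizing the map's mass pushes it toward a small, fine-grained set of diagnostically relevant regions.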
