Abstract

Differential diagnosis of tumors is important for computer-aided diagnosis. In computer-aided diagnosis systems, however, the expert knowledge contained in lesion segmentation masks is underused: it typically serves only during preprocessing or as supervision to guide feature extraction. To make better use of lesion segmentation masks, this study proposes a simple and effective multitask learning network, RS²-net, that improves medical image classification by using self-predicted segmentation as guiding knowledge. In RS²-net, the segmentation probability map obtained from an initial segmentation inference is added to the original image to form a new input, which is then fed back into the network for the final classification inference. We validated RS²-net on three datasets: the pNENs-Grade dataset for predicting pancreatic neuroendocrine neoplasm grading, the HCC-MVI dataset for predicting microvascular invasion of hepatocellular carcinoma, and the public ISIC 2017 skin lesion dataset. The experimental results indicate that the proposed strategy of reusing self-predicted segmentation is effective and that RS²-net outperforms other popular networks and existing state-of-the-art methods. Interpretive analysis based on feature visualization shows that the improved classification performance of the reuse strategy stems from semantic information becoming available early, in the shallow layers of the network.
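
The following is a minimal PyTorch sketch of the reuse strategy described above, assuming a shared encoder with separate segmentation and classification heads; all module names, layer sizes, and the two-pass forward structure are illustrative placeholders, not the authors' implementation.

# Minimal sketch (assumed architecture, not the paper's code): predict a
# segmentation probability map, add it to the original image, and re-run the
# network on this new input for the final classification.
import torch
import torch.nn as nn


class ReuseSegClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared encoder (placeholder; a deeper backbone would be used in practice).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head producing a per-pixel lesion probability map.
        self.seg_head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())
        # Classification head operating on encoder features.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes)
        )

    def forward(self, x):
        # Pass 1: initial segmentation inference on the raw image.
        seg_prob = self.seg_head(self.encoder(x))
        # Reuse step: add the self-predicted probability map to the original
        # image to form a new, lesion-highlighted input.
        x_reused = x + seg_prob
        # Pass 2: final classification inference on the new input.
        logits = self.cls_head(self.encoder(x_reused))
        return seg_prob, logits


if __name__ == "__main__":
    model = ReuseSegClassifier(num_classes=3)
    image = torch.randn(2, 1, 64, 64)   # e.g. single-channel image patches
    seg, logits = model(image)
    print(seg.shape, logits.shape)      # torch.Size([2, 1, 64, 64]) torch.Size([2, 3])

In this sketch the encoder is applied twice with shared weights, so the second pass sees features computed from an input in which lesion regions have already been emphasized by the first-pass prediction.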
