Abstract

To alleviate the reliance on expensive annotations, contrastive learning techniques have been applied to diagnose diseases from various types of medical images. However, popular contrastive learning methods, which generate positive pairs by random cropping, struggle with retinal disease diagnosis because retinal lesions are typically tiny and randomly distributed across abnormal fundus images. Such lesions may be missed after random cropping, producing semantically inconsistent positive pairs that undermine the effectiveness of contrastive learning for retinal disease diagnosis. To address this issue, we propose a novel unsupervised gradient-weighted class activation mapping (Grad-CAM) strategy that roughly locates lesions, thereby suppressing or even eliminating semantically inconsistent positive pairs. Specifically, we develop a gradient-weighted Class Activation Map guided Contrastive Learning (CAMCL) method with two branches: one for the Grad-CAM based instance discrimination task and one for the k-nearest neighbors (KNN) based cluster-wise discrimination task. By minimizing the KNN loss, the cluster-wise discrimination branch learns high-level representations that carry class semantic information; gradients are then back-propagated to generate Grad-CAM heatmaps from unlabeled data. The generated heatmaps highlight class-discriminative regions in abnormal fundus images (e.g., retinal lesions), which are used to identify semantically consistent positive pairs while suppressing inconsistent ones. The semantically consistent positive pairs are then fed to the instance discrimination task for contrastive learning. In this manner, the semantic inconsistency problem is alleviated, and the improved contrastive learning pipeline can be effectively applied to retinal disease diagnosis. Experimental results on five retinal disease classification datasets show that our model surpasses other contrastive learning methods, indicating its promise for clinical application.
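To make the pair-filtering idea concrete, the sketch below shows one way such Grad-CAM guidance could work in PyTorch: a heatmap is computed from the encoder's last convolutional block by back-propagating a proxy score (standing in for the cluster-wise branch, since no labels are available), and a pair of random crops is kept as a positive pair only if both crops overlap high-activation regions. This is a minimal illustration under stated assumptions, not the authors' implementation; the names (GradCAM, crops_are_consistent), the ResNet-18 stand-in encoder, the proxy score, and the 0.2 threshold are all illustrative.

# Hypothetical sketch: Grad-CAM guided positive-pair filtering for
# unlabeled fundus images. Assumes a ResNet-style encoder; all names
# and the threshold are illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F
import torchvision.models as models

class GradCAM:
    """Compute a Grad-CAM heatmap from a chosen conv block of a CNN."""
    def __init__(self, model, target_layer):
        self.model = model
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inp, out):
        self.activations = out.detach()

    def _save_gradient(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def __call__(self, image, score_fn):
        """image: (1, 3, H, W); score_fn maps model output to a scalar."""
        self.model.zero_grad()
        score = score_fn(self.model(image))
        score.backward()
        # Channel weights = globally average-pooled gradients (Grad-CAM).
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
        cam = F.relu((weights * self.activations).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                            align_corners=False)
        cam = cam - cam.min()
        return (cam / cam.max().clamp(min=1e-8)).squeeze()        # (H, W) in [0, 1]

def crops_are_consistent(cam, box1, box2, thresh=0.2):
    """Keep a positive pair only if both crops cover class-discriminative
    regions. box = (top, left, height, width); thresh is a hypothetical
    cutoff on the mean CAM activation inside each crop."""
    def mean_cam(box):
        t, l, h, w = box
        return cam[t:t + h, l:l + w].mean().item()
    return mean_cam(box1) > thresh and mean_cam(box2) > thresh

if __name__ == "__main__":
    encoder = models.resnet18(weights=None)   # stand-in for the learned encoder
    encoder.eval()
    cam_fn = GradCAM(encoder, encoder.layer4)
    img = torch.randn(1, 3, 224, 224)         # placeholder fundus image
    # Without labels, back-propagate from the most-activated output unit as a
    # proxy class score, mirroring the idea of deriving heatmaps from the
    # learned (cluster-wise) representation.
    heatmap = cam_fn(img, score_fn=lambda out: out.max())
    box_a, box_b = (10, 10, 128, 128), (80, 80, 128, 128)
    if crops_are_consistent(heatmap, box_a, box_b):
        print("semantically consistent pair -> use for contrastive loss")
    else:
        print("inconsistent pair -> suppress")

In the paper's pipeline the proxy score would presumably come from the KNN-based cluster-wise branch rather than a raw classifier output; the per-crop threshold test here is just one simple way to realize "suppressing or even eliminating" semantically inconsistent pairs before the instance discrimination loss.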
