Abstract

Fundus image retrieval can help ophthalmologists make evidence-based medical decisions by providing similar cases. Its core task is to learn highly discriminative visual descriptors from the image space, in which lesion features are the main differentiating cue. Lesions in fundus images, such as microaneurysms and hemorrhages, are small in size, similar in texture, and scattered around vessels. Hence, although a single small lesion has a salient visual manifestation, its discriminative information is hard to preserve in the final image descriptors. In fundus images, the optic discs of the left and right eyes are symmetric, and the macular area lies on the central axis in the vertical view. Based on this spatial structure and these lesion characteristics, we present a novel deep metric learning framework equipped with mirror attention to enhance the discriminative features of small, scattered lesions and encode them into image descriptors. The mirror attention assigns high attention scores to lesions by capturing spatial dependencies across vertical and horizontal views, especially the relations between lesions and vessels. Building on the mirror attention, we further propose a new fine triplet loss that constrains the distances of positive pairs by exploiting their learned relevance degrees in a self-supervised manner. The fine triplet loss helps detect the subtle differences between positive pairs and thus improves the ranking of hit items. To demonstrate the effectiveness of our method, we conduct comprehensive experiments on the largest fundus dataset for diabetic retinopathy (DR) detection and achieve the best precision among the compared methods. The experiments show that our method yields significant performance improvements for fundus image retrieval, especially in the ranking quality of DR grades containing microaneurysms and hemorrhages. The proposed mirror attention can be applied to off-the-shelf backbones and trained efficiently in an end-to-end manner on other medical images to obtain highly discriminative image descriptors.
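The abstract does not include implementation details, but the idea of attending across vertical and horizontal views can be made concrete. The following is a minimal sketch, assuming a PyTorch backbone, of one way such a mirror-style attention module could be realized: queries come from the feature map and keys from its horizontally and vertically flipped copies, so symmetric structures (optic disc, macula, vessel arcades) reinforce each other. The class name `MirrorAttention`, the `reduction` factor, and the residual gating are our assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MirrorAttention(nn.Module):
    """Hypothetical sketch: attention built from correlations between a
    feature map and its horizontal/vertical mirrors, loosely following
    the abstract's description (not the authors' implementation)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual gate

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2)                        # (B, C', HW)
        # Keys from the mirrored maps, mimicking the left/right symmetry
        # of the optic disc and the vertical symmetry about the macula.
        k_h = self.key(torch.flip(x, dims=[3])).flatten(2)  # horizontal mirror
        k_v = self.key(torch.flip(x, dims=[2])).flatten(2)  # vertical mirror
        attn = torch.bmm(q.transpose(1, 2), k_h) + \
               torch.bmm(q.transpose(1, 2), k_v)            # (B, HW, HW)
        attn = F.softmax(attn, dim=-1)
        v = x.flatten(2)                                     # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                          # residual output
```

Such a module would typically be inserted after the last convolutional stage of an off-the-shelf backbone (e.g., a ResNet) before global pooling, which matches the abstract's claim that the attention plugs into existing backbones and trains end-to-end.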
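Similarly, the fine triplet loss is only described at a high level: positive-pair distances are constrained using learned relevance degrees. One plausible reading, sketched below under our own assumptions, is to scale the positive distance by a per-pair relevance weight in [0, 1] so that highly relevant positives are pulled closer than loosely relevant ones; the exact formulation in the paper may differ.

```python
import torch.nn.functional as F

def fine_triplet_loss(anchor, positive, negative, relevance, margin=0.3):
    """Hypothetical sketch of a relevance-weighted triplet loss.

    anchor, positive, negative: (N, D) embedding batches.
    relevance: (N,) learned relevance degrees of each positive pair,
    assumed in [0, 1] with higher meaning more similar.
    """
    d_ap = F.pairwise_distance(anchor, positive)   # positive-pair distances
    d_an = F.pairwise_distance(anchor, negative)   # negative-pair distances
    # Tighten the pull on highly relevant positives, relax it otherwise,
    # so subtle intra-class differences still order the ranking.
    loss = F.relu(relevance * d_ap - d_an + margin)
    return loss.mean()
```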
