Abstract

The extraction of informative features from medical images and the retrieval of similar images from data repositories are vital for clinical decision support systems. Unlike general tasks such as medical image classification and segmentation, retrieval offers greater interpretability. However, the task is challenging due to the multimodal and imbalanced nature of medical image data. Traditional retrieval methods rely on hand-crafted features to guide approximate hashing functions, so they often fail to capture the latent characteristics of images. Deep learning based retrieval methods can eliminate the drawbacks of hand-crafted feature extraction, but a deep architecture requires a large-scale dataset of labeled and balanced samples to achieve high performance. Since most medical datasets lack these properties, existing hashing methods are not powerful enough to model patterns in medical images, which share a similar general appearance but differ in subtle details. In this study, a novel W-shaped contrastive loss (W-SCL) is proposed for skin lesion image retrieval on datasets whose inter-class visual differences are relatively small. We considerably improve on the traditional contrastive loss (CL) by incorporating label information for very similar skin lesion images. We test the proposed W-SCL on two benchmark general-image datasets and two benchmark skin lesion datasets, conducting experiments with various pre-trained CNN and shallow CNN architectures. These extensive experiments reveal that the proposed method improves mean average precision (mAP) by approximately 7% on the general image datasets and approximately 12% on the skin lesion datasets.
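For context, the traditional contrastive loss (CL) that the abstract takes as its baseline is the standard pairwise formulation of Hadsell et al.: matching pairs are pulled together and non-matching pairs are pushed at least a margin apart. The sketch below shows this baseline in PyTorch; the function name `contrastive_loss` and the `margin` default are illustrative, and the paper's W-SCL variant, which additionally incorporates label information, is not reproduced here.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Standard pairwise contrastive loss (a baseline sketch, not the paper's W-SCL).

    same_class is 1.0 for pairs from the same class and 0.0 otherwise.
    """
    d = F.pairwise_distance(emb_a, emb_b)               # Euclidean distance per pair
    pos = same_class * d.pow(2)                         # similar pairs: minimize distance
    neg = (1.0 - same_class) * F.relu(margin - d).pow(2)  # dissimilar pairs: enforce margin
    return (pos + neg).mean()

# Usage with toy embeddings:
a = torch.randn(8, 128)
b = torch.randn(8, 128)
y = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(a, b, y))
```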
