Abstract

With the rapid growth of the Internet, large numbers of multi-modal objects, such as images and their social tags, can easily be downloaded from the Web. Such objects can improve the training process when only a few labeled images are available. To leverage these unlabeled and labeled multi-modal Web objects to enhance the performance of unimodal image retrieval, we propose in this paper a novel approach to semantic image retrieval called Semi-supervised Multi-concept Retrieval via Deep Learning (SMRDL). Unlike conventional methods that treat the concepts in a semantic multi-concept query as multiple independent concepts, our approach regards them as a holistic scene for multi-concept scene learning in unimodal retrieval. Specifically, we first train a multi-modal Convolutional Neural Network (CNN) as a concept classifier for images and texts, and then use it to annotate unlabeled Web images. For each unlabeled image, we obtain its most relevant concept annotations using a new annotation-promotion strategy. Finally, we train a concept classifier in the visual modality with a unimodal visual CNN, using both unlabeled and labeled examples for concept learning in unimodal retrieval. Comprehensive experiments on the MIR Flickr 2011 and NUS-WIDE datasets show that our approach outperforms several state-of-the-art methods.
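The semi-supervised pipeline summarized above (multi-modal annotation of unlabeled Web images, annotation promotion, then visual-only training) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the network sizes, the promote_annotations helper, the top-k/threshold values, and the combined loss are all assumptions made for the example.

import torch
import torch.nn as nn

# Hypothetical multi-modal concept classifier: fuses image features and
# tag (text) features and predicts per-concept probabilities.
class MultiModalConceptNet(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, num_concepts=81):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_concepts),
        )

    def forward(self, img_feat, txt_feat):
        return self.fuse(torch.cat([img_feat, txt_feat], dim=1))


def promote_annotations(logits, top_k=3, threshold=0.5):
    """Keep only the most confident concepts as pseudo-labels
    (an illustrative stand-in for the annotation-promotion strategy)."""
    probs = torch.sigmoid(logits)
    pseudo = torch.zeros_like(probs)
    top_vals, top_idx = probs.topk(top_k, dim=1)
    keep = top_vals >= threshold          # discard weak predictions
    pseudo.scatter_(1, top_idx, keep.float())
    return pseudo


def pseudo_label_step(mm_model, visual_model, optimizer,
                      unlabeled_img, unlabeled_txt,
                      labeled_img, labeled_targets):
    """One training step of the visual-only classifier on a mix of
    labeled images and pseudo-labeled unlabeled Web images.
    Here visual_model is a classifier over precomputed image features,
    standing in for the unimodal visual CNN."""
    mm_model.eval()
    with torch.no_grad():
        pseudo_targets = promote_annotations(mm_model(unlabeled_img, unlabeled_txt))

    visual_model.train()
    criterion = nn.BCEWithLogitsLoss()
    loss = criterion(visual_model(labeled_img), labeled_targets) \
         + criterion(visual_model(unlabeled_img), pseudo_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()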
