Abstract

Owing to their fast query speed and low storage cost, multimodal hashing methods have attracted increasing attention in large-scale cross-media retrieval tasks. Most existing multimodal hashing methods can only handle fully-paired settings, where all data samples across different modalities are well paired. In practical applications, however, such fully-paired multimodal data may not be available. To handle such data, semi-paired multimodal hashing methods have been proposed that exploit correlations between unpaired samples. Nevertheless, existing semi-paired hashing methods are unsupervised: even when a small amount of supervised information is available, they cannot utilize it to enhance retrieval performance. To effectively utilize the limited supervised information, this paper proposes a novel hashing framework, named semi-paired and semi-supervised multimodal hashing (SSMH), for cross-media retrieval in the scenario where only partial pairwise correspondences and labels are provided in advance. SSMH propagates semantic labels from labeled multimodal samples to unlabeled ones, so that label information becomes available for the entire multimodal training set. Most existing similarity-graph-based supervised multimodal hashing methods can then be used to learn the hash codes. The proposed framework can therefore fully utilize the limited label information and pairwise correspondences to preserve semantic similarity in the learned hash codes. Thorough experiments on standard datasets demonstrate the superior performance of the proposed framework.
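The label-propagation step is the core of the framework. As a rough, hypothetical sketch only (not the authors' exact formulation, which the abstract does not specify), the following Python code illustrates standard graph-based label propagation over a Gaussian-kernel affinity graph; the function name propagate_labels and the parameters sigma, alpha, and n_iters are illustrative assumptions.

```python
import numpy as np

def propagate_labels(features, labels, labeled_mask,
                     sigma=1.0, alpha=0.99, n_iters=50):
    """Hypothetical sketch of graph-based label propagation.

    features: (n, d) array of sample features.
    labels: (n, c) one-hot label matrix; rows for unlabeled samples are zero.
    labeled_mask: boolean (n,) array marking samples with known labels.
    Returns the predicted class index for every sample.
    """
    # Build a Gaussian-kernel affinity graph over all training samples.
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2,
                      axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetrically normalize the graph: S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Iterate F <- alpha * S F + (1 - alpha) * Y, clamping labeled rows
    # so the given labels stay fixed throughout the diffusion.
    Y = labels.copy()
    F = Y.copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
        F[labeled_mask] = Y[labeled_mask]
    return F.argmax(axis=1)

if __name__ == "__main__":
    # Toy demo: 100 samples of one modality, only 10 labeled.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 16))
    mask = np.zeros(100, dtype=bool)
    mask[:10] = True
    Y = np.zeros((100, 3))
    Y[np.arange(10), rng.integers(0, 3, 10)] = 1.0
    print(propagate_labels(X, Y, mask)[:20])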
