Abstract

Owing to its high efficiency and low storage cost, hashing has gained wide attention in the field of cross-modal search. Currently, most hashing methods learn unified hash codes for retrieval by leveraging either the intrinsic characteristics of the original data or similarity graphs constructed from supervised information. However, both schemes neglect the discrepancy between distinct modalities in real-world scenarios, which leads to poor search performance and limited scalability. Toward this end, in this paper we propose a novel supervised hashing method, termed Maximized Shared Latent Factor (MSLF), in which the common property and the unique properties of different modalities are taken into account simultaneously. Specifically, the common property (i.e., the shared latent factor) is generated from the available label supervision without constructing a similarity graph, which reduces the computational cost and makes the method suitable for cross-modal search applications. Meanwhile, the unique property (i.e., the individual latent factor) learned for each modality, together with cross-correlation matching over data instances, is designed to alleviate the structural difference between modalities. Furthermore, we combine these properties to learn a maximized shared latent factor from which the hash codes are obtained, which further enhances the search performance of MSLF. Extensive quantitative experiments on three popular datasets demonstrate that the proposed MSLF outperforms state-of-the-art hashing methods in both search performance and scalability.
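As a rough, illustrative sketch only (the abstract does not state the actual objective, so the symbols $X_m$, $U_m$, $V$, $W$, $Y$, $B$ and all weights below are assumptions rather than the authors' formulation), a label-supervised shared-latent-factor hashing objective of this flavor could be written as

\[
\min_{U_1, U_2, V, W} \; \sum_{m=1}^{2} \alpha_m \, \lVert X_m - U_m V \rVert_F^2 \;+\; \beta \, \lVert Y - W V \rVert_F^2 \;+\; \lambda \left( \lVert U_1 \rVert_F^2 + \lVert U_2 \rVert_F^2 + \lVert W \rVert_F^2 \right), \qquad B = \operatorname{sign}(V),
\]

where $X_1$ and $X_2$ are the feature matrices of the two modalities, $V$ is the shared latent factor common to both, $U_1$ and $U_2$ are the modality-specific (individual) latent factors, $Y$ is the label matrix used as supervision in place of a pairwise similarity graph, and the unified hash codes $B$ are obtained by binarizing the shared latent factor.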
