Abstract

Hashing-based cross-modal retrieval methods have attracted considerable attention due to their efficient retrieval performance and low storage cost. Recently, supervised methods have demonstrated excellent retrieval accuracy. However, many of these methods construct a massive similarity matrix from labels and disregard the discrete constraints imposed on the hash codes, which makes them unscalable and degrades retrieval performance. To overcome these shortcomings, we propose a novel supervised hashing method, named Fast Discrete Matrix Factorization Hashing (FDMFH), which focuses on correlation preservation and on learning hash codes under the discrete constraints. Specifically, FDMFH utilizes matrix factorization to learn a latent semantic space in which relevant data share the same semantic representation. Then, discriminative hash codes generated by rotating quantization and linear regression preserve the original locality structure of the training data. Moreover, an efficient discrete optimization method learns the unified hash codes in a single step. Extensive experiments on two benchmark datasets, MIRFlickr and NUS-WIDE, verify that FDMFH outperforms several state-of-the-art methods.

