Abstract

Cross-modal hashing has attracted extensive attention due to its low storage cost and favorable retrieval efficiency. Matrix factorization-based methods are an important class of cross-modal hashing methods. Most existing matrix factorization hashing methods map heterogeneous cross-modal data into a low-dimensional common Hamming space and then adopt a relaxation-and-quantization strategy to obtain an approximate hash-code solution. However, this process introduces uncontrollable quantization error, which may degrade retrieval performance. In this paper, we propose a Supervised Discrete Matrix Factorization Hashing (SDMFH) approach for cross-modal retrieval, which learns a modality-specific latent semantic space for each modality via matrix factorization. The semantic spaces are required to reconstruct the similarity affinity matrix well, so that label consistency across modalities is fully exploited. Furthermore, the binary hash codes are learned directly using the discrete cyclic coordinate descent algorithm, which effectively reduces the quantization error. Experiments on the widely used Wiki and NUS-WIDE datasets demonstrate that the proposed SDMFH approach outperforms state-of-the-art related methods.
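
To make the discrete optimization step concrete, the following is a minimal sketch of a bit-wise discrete cyclic coordinate descent (DCC) update of the kind referenced above, written in the generic form used in the supervised discrete hashing literature: minimize ||B W||_F^2 - 2 tr(B^T Q) over B in {-1, +1}^{n x r} with W and Q held fixed. The abstract does not give SDMFH's exact objective, so the function name dcc_hash_codes and the construction of Q and W here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dcc_hash_codes(Q, W, n_sweeps=5, B_init=None):
    """Bit-wise DCC sketch (assumed generic form, not SDMFH's exact objective):

        min_B  ||B W||_F^2 - 2 tr(B^T Q)   s.t.  B in {-1, +1}^{n x r}

    Q : (n, r) matrix collecting all terms linear in B
    W : (r, c) auxiliary projection, fixed in this subproblem
    Returns B : (n, r) binary codes in {-1, +1}.
    """
    n, r = Q.shape
    B = np.sign(Q) if B_init is None else B_init.copy()
    B[B == 0] = 1  # keep codes strictly in {-1, +1}

    for _ in range(n_sweeps):
        for l in range(r):            # update one bit (column of B) at a time
            v = W[l, :]               # l-th row of W
            q = Q[:, l]               # l-th column of Q
            Bp = np.delete(B, l, axis=1)   # B without the l-th bit
            Wp = np.delete(W, l, axis=0)   # W without the l-th row
            z = np.sign(q - Bp @ (Wp @ v)) # closed-form update for this bit
            z[z == 0] = 1
            B[:, l] = z
    return B

# Purely illustrative usage with random data.
rng = np.random.default_rng(0)
Q = rng.standard_normal((100, 16))   # e.g., supervision/reconstruction terms
W = rng.standard_normal((16, 10))
B = dcc_hash_codes(Q, W)
```

Because each bit has a closed-form sign update with the others fixed, every sweep monotonically decreases the objective without any continuous relaxation, which is how the quantization error of relax-then-threshold schemes is avoided.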
