Abstract
Cross-modal hashing has received increasing research attention due to its low storage cost and efficient retrieval. However, most existing cross-modal hashing methods focus only on exploiting multi-modal information, while underestimating the significance of local and Euclidean structure information in the hash learning procedure. In this paper, we propose a supervised discrete cross-modal hashing method, named Scalable Discriminative Discrete Hashing (SDDH), for cross-modal retrieval, where 1) the discrete hash codes are obtained directly from the multi-modal features and semantic labels, so that quantization errors are dramatically reduced, and 2) the discrete hash codes simultaneously preserve the heterogeneous similarity and manifold information of the original space by employing matrix factorization with orthogonal and balanced constraints. Moreover, an efficient optimization scheme is introduced to obtain the discrete solution, which makes SDDH scalable to large-scale cross-modal retrieval. Empirical results on three widely used benchmark databases clearly demonstrate the effectiveness and efficiency of the proposed method in comparison with state-of-the-art approaches.
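To make the general idea concrete, the sketch below illustrates supervised discrete cross-modal hashing in NumPy: binary codes are derived directly from semantic labels via a sign quantization (avoiding a continuous relaxation), and modality-specific linear hash functions are then fit to those codes. This is a minimal toy illustration under assumed names and a simple ridge-regression update, not the authors' actual SDDH objective or its orthogonal/balanced constrained solver.

```python
import numpy as np

# Toy sketch of discrete cross-modal hashing (NOT the exact SDDH method):
# learn a shared binary code matrix B from label information, then fit
# per-modality hash functions. All names and updates here are illustrative
# assumptions, not taken from the paper.

rng = np.random.default_rng(0)
n, d1, d2, c, r = 500, 64, 32, 10, 16    # samples, feature dims, classes, code length

X1 = rng.standard_normal((n, d1))        # modality 1 features (e.g., image)
X2 = rng.standard_normal((n, d2))        # modality 2 features (e.g., text)
Y = np.eye(c)[rng.integers(0, c, n)]     # one-hot semantic labels

# Project labels into the code space and take the sign directly, so no
# continuous relaxation (and thus no relaxation quantization error) is used.
W = rng.standard_normal((c, r))
B = np.sign(Y @ W)                       # discrete codes in {-1, +1}^{n x r}
B[B == 0] = 1

# Modality-specific linear hash functions via ridge regression, so that
# sign(X_m @ P_m) approximates B for out-of-sample queries.
lam = 1.0
P1 = np.linalg.solve(X1.T @ X1 + lam * np.eye(d1), X1.T @ B)
P2 = np.linalg.solve(X2.T @ X2 + lam * np.eye(d2), X2.T @ B)

# Cross-modal retrieval: hash queries from one modality, rank items of the
# other modality by Hamming distance (dot = r - 2 * hamming for +-1 codes).
q = np.sign(X1[:5] @ P1)                 # hash 5 queries from modality 1
db = np.sign(X2 @ P2)                    # hashed database from modality 2
hamming = (r - q @ db.T) / 2
print(hamming.shape)                     # (5, 500) pairwise Hamming distances
```

The sign-based quantization above is what makes the codes "discrete": it keeps the binary constraint throughout rather than solving a relaxed real-valued problem and rounding afterwards, which is the source of the quantization error the abstract refers to.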