Abstract

Recently, hashing-based multimodal learning systems have received increasing attention owing to their query efficiency and low storage costs. However, hampered by the quantization loss introduced by relaxed numerical optimization, existing cross-media hashing approaches are unable to capture all of the discriminative information present in the original multimodal data. Moreover, most cross-modal methods follow a one-step paradigm that learns the binary codes and the hash functions simultaneously, which complicates the optimization. To address these issues, we propose a novel two-stage approach, termed two-stage supervised discrete hashing (TSDH). In the first stage, TSDH generates a latent representation for each modality; these representations are then mapped to a common Hamming space to produce the binary codes. In addition, TSDH directly embeds the semantic labels into the hash codes, enhancing the discriminative power of the learned binary codes. A discrete optimization scheme is developed to learn the binary codes without relaxation, avoiding large quantization loss. In the second stage, the proposed hash function learning scheme reuses the semantic information contained in the embeddings, endowing the hash functions with enhanced discriminability. Extensive experiments on several databases demonstrate the effectiveness of TSDH, which outperforms several recent competitive cross-media hashing algorithms.
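To make the two-stage structure concrete, the following is a minimal Python sketch of such a pipeline, not the paper's actual algorithm: stage one derives shared binary codes from the semantic labels (here via a hypothetical random projection followed by sign binarization, standing in for the paper's relaxation-free discrete optimization), and stage two fits a per-modality linear hash function by regressing the learned codes onto each modality's features. All function names, the ridge-regression hash-function learner, and the toy data are illustrative assumptions.

```python
import numpy as np


def stage1_learn_codes(labels, code_length, seed=0):
    """Stage 1 (sketch): derive shared binary codes from semantic labels.

    Hypothetical simplification: project the label matrix with a random map
    and binarize; the paper's discrete (relaxation-free) optimization is not
    reproduced here.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((labels.shape[1], code_length))
    B = np.sign(labels @ W)
    B[B == 0] = 1
    return B  # shape (n_samples, code_length), entries in {-1, +1}


def stage2_learn_hash_function(X, B, reg=1e-3):
    """Stage 2 (sketch): fit a linear hash function for one modality by
    ridge regression of the shared codes onto the modality's features."""
    d = X.shape[1]
    P = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ B)
    return P  # shape (d, code_length)


def hash_codes(X, P):
    """Map out-of-sample features to binary codes via the learned projection."""
    H = np.sign(X @ P)
    H[H == 0] = 1
    return H


# Toy usage: two modalities (e.g., image and text features) sharing labels.
n, d_img, d_txt, n_classes, r = 200, 64, 32, 10, 16
rng = np.random.default_rng(1)
Y = np.eye(n_classes)[rng.integers(0, n_classes, n)]    # one-hot labels
X_img = rng.standard_normal((n, d_img))
X_txt = rng.standard_normal((n, d_txt))

B = stage1_learn_codes(Y, r)                  # shared codes for both modalities
P_img = stage2_learn_hash_function(X_img, B)  # modality-specific hash functions
P_txt = stage2_learn_hash_function(X_txt, B)
print(hash_codes(X_img[:3], P_img).shape)     # (3, 16)
```

Decoupling the two stages in this way keeps the code-learning step free of the hash-function parameters, which is the source of the reduced optimization complexity claimed for two-stage methods.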
