Abstract
Cross-modal hashing methods have drawn considerable attention due to the rapid growth of multi-modal data. To obtain efficient binary codes in a low-dimensional Hamming space, most existing approaches relax the discrete constraint, which can cause quantization loss and even degrade retrieval performance. To avoid this bottleneck, some methods employ iterative discrete cyclic coordinate descent (DCC) to learn hash codes bit by bit, but this is very time-consuming. To address this problem, a simple yet novel supervised discrete cross-modal hashing framework is presented that directly learns the unified discrete binary codes in closed form, rather than bit by bit. Furthermore, to preserve label separability, kernel discriminant analysis is fused into the proposed framework to enrich the discriminative ability of the learned binary codes. The goal of the proposed method is to obtain common discrete binary codes for different modalities in a shared latent Hamming space, so that the different modalities of a sample can be effectively connected. Experiments on four real-world datasets show encouraging results for the proposed algorithm in comparison with state-of-the-art baseline approaches. On the LabelMe dataset in particular, the superiority of the proposed method is clear, with an average improvement of 9% over the best available results.
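The abstract contrasts two ways of handling the discrete constraint on binary codes. The sketch below is a hedged illustration of that contrast, not the paper's actual objective: it uses a generic quantization subproblem (where the closed-form minimizer is an element-wise sign) and a generic regression-style subproblem solved by DCC-style bit-by-bit updates. The matrices `Z`, `Y`, and `W` are hypothetical stand-ins for whatever real-valued embeddings and projections the method would actually learn.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 4  # n samples, r-bit codes

# --- Closed-form case (illustrative) ---
# For a subproblem of the form  min_B ||B - Z||_F^2  s.t. B in {-1,+1}^{n x r},
# the minimizer is simply the element-wise sign of Z: all bits at once.
Z = rng.standard_normal((n, r))      # stand-in for a learned real-valued embedding
B_closed = np.where(Z >= 0, 1, -1)

# --- DCC-style bit-by-bit case (illustrative) ---
# For a harder subproblem such as  min_B ||Y - B W||_F^2  s.t. B in {-1,+1}^{n x r},
# DCC fixes all bits except one column and solves for that column exactly.
def dcc_update(Y, W, B, n_sweeps=3):
    """Cyclic coordinate descent over code bits; each column update is optimal."""
    for _ in range(n_sweeps):
        for k in range(B.shape[1]):
            B_k = np.delete(B, k, axis=1)     # all columns except bit k
            W_k = np.delete(W, k, axis=0)
            # optimal k-th column: sign of the residual's correlation with w_k
            z = (Y - B_k @ W_k) @ W[k]
            B[:, k] = np.where(z >= 0, 1, -1)
    return B
```

Because each column update is an exact minimization over that bit, the DCC objective is monotonically non-increasing, but a full pass costs one sweep per bit per iteration; the closed-form sign solution needs no such sweeps, which is the efficiency argument the abstract makes.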