Abstract

As increasing amounts of multimodal data emerge on the Internet, cross-modal retrieval has become an important research topic. Given the massive scale of cross-modal data and the high dimensionality of their features, hashing has been widely explored because it reduces storage costs and accelerates retrieval. In this paper, we put forward a deep cross-modal hashing approach, dubbed semantic deep cross-modal hashing (SDCH), which makes effective use of semantic label information to generate more discriminative hash codes. Specifically, SDCH uses semantic label branches to improve the feature learning part, preserving the semantic information of the learned features and maintaining the invariance of cross-modal data. It further employs hash code learning branches to keep the hash codes of different modalities consistent in the Hamming space. In addition, it adopts an inter-modal pairwise loss, a cross-entropy loss, and a quantization loss to ensure that all similar instance pairs are ranked as more relevant than dissimilar ones. Compared with the state-of-the-art method, attention-aware deep adversarial hashing (AADAH), SDCH improves performance by an average of 6.14%, 4.84%, and 3.75% on three widely used datasets, IAPR TC-12, MIR-Flickr 25K, and NUS-WIDE, respectively.
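
The abstract names three training objectives but not their exact formulations. The sketch below (PyTorch) illustrates how such a combined objective might look, assuming a common negative log-likelihood form for the inter-modal pairwise loss, a multi-label cross-entropy on the label branches, and an L2 quantization penalty. The function name `sdch_objective`, the weights `alpha` and `beta`, and all tensor shapes are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def sdch_objective(img_hash, txt_hash, img_label_logits, txt_label_logits,
                   labels, sim, alpha=1.0, beta=0.5):
    """Illustrative combination of the three losses named in the abstract.

    img_hash, txt_hash:        (N, bits) real-valued network outputs
    img/txt_label_logits:      (N, num_labels) semantic-branch logits
    labels:                    (N, num_labels) multi-hot ground-truth labels
    sim:                       (N, N) binary cross-modal similarity matrix
    Weights and exact forms are assumptions, not the paper's equations.
    """
    # Inter-modal pairwise loss (negative log-likelihood form): similar
    # image/text pairs (sim=1) are pushed close in the Hamming space.
    theta = 0.5 * img_hash @ txt_hash.t()          # pairwise inner products
    pairwise = (F.softplus(theta) - sim * theta).mean()

    # Cross-entropy loss on the semantic label branches of both modalities,
    # injecting label supervision into feature learning (multi-label form).
    ce = (F.binary_cross_entropy_with_logits(img_label_logits, labels)
          + F.binary_cross_entropy_with_logits(txt_label_logits, labels))

    # Quantization loss: pulls real-valued outputs toward the binary codes
    # obtained by sign(), which are used at retrieval time.
    quant = ((img_hash - img_hash.sign()).pow(2).mean()
             + (txt_hash - txt_hash.sign()).pow(2).mean())

    return pairwise + alpha * ce + beta * quant
```

The softplus form keeps the pairwise term numerically stable for large inner products; at retrieval time the binary codes would be taken as the sign of the network outputs.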
