Abstract

Due to its low storage cost and high retrieval efficiency, cross-modal hashing has received considerable attention. Most existing deep cross-modal hashing methods adopt a symmetric strategy, learning the same deep hash functions for both query instances and database instances. However, training these symmetric deep cross-modal hashing methods is time-consuming, which makes it hard for them to effectively exploit the supervised information in large-scale datasets. Inspired by recent advances in asymmetric hashing, in this paper we propose discriminative deep asymmetric supervised hashing (DDASH) for cross-modal retrieval. Specifically, the asymmetric scheme learns hash codes only for query instances through deep hash functions, while the hash codes of database instances are obtained through hand-crafted matrices. This not only makes full use of the information in large-scale datasets but also reduces training time. In addition, we introduce discrete optimization to reduce the binary quantization error. Furthermore, a mapping matrix that maps the generated hash codes to their corresponding labels is introduced to ensure that the hash codes are discriminative. We also compute the level of similarity between instances as supervised information. Experiments on three common cross-modal retrieval datasets show that DDASH outperforms state-of-the-art cross-modal hashing methods.
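
To make the asymmetric scheme concrete, the sketch below trains a deep hash function on query instances only, treats the database codes as free binary variables updated by a discrete sign step, and includes quantization, label-mapping, and graded-similarity terms of the kind described above. This is a minimal illustration in PyTorch under our own assumptions: the network QueryHashNet, the loss weights, and the sign-step update are placeholders following common practice in asymmetric hashing, not the authors' DDASH implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
code_length, feat_dim, num_labels = 16, 512, 10
n_query, n_database = 32, 1000

class QueryHashNet(nn.Module):
    """Deep hash function applied only to query instances (asymmetric scheme).
    Architecture is an illustrative assumption, not the paper's network."""
    def __init__(self, feat_dim, code_length):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, code_length), nn.Tanh(),  # relaxed codes in (-1, 1)
        )

    def forward(self, x):
        return self.net(x)

# Toy data: query features, multi-label annotations for both sides, and a
# graded similarity matrix S from label overlap (the "level of similarity").
x_query = torch.randn(n_query, feat_dim)
y_query = (torch.rand(n_query, num_labels) > 0.7).float()
y_db = (torch.rand(n_database, num_labels) > 0.7).float()
overlap = y_query @ y_db.t()
S = overlap / overlap.max().clamp(min=1.0)  # similarity levels in [0, 1]

net = QueryHashNet(feat_dim, code_length)
B = torch.sign(torch.randn(n_database, code_length))  # free binary database codes
P = torch.randn(code_length, num_labels, requires_grad=True)  # code-to-label map

opt = torch.optim.Adam(list(net.parameters()) + [P], lr=1e-3)
for step in range(100):
    U = net(x_query)  # relaxed query codes from the deep hash function
    # Asymmetric term: query/database inner products should match the graded
    # similarity, scaled by the code length.
    sim_loss = ((U @ B.t() - code_length * S) ** 2).mean()
    # Quantization term: push relaxed codes toward binary values.
    quant_loss = ((U - torch.sign(U).detach()) ** 2).mean()
    # Discriminative term: binary codes should recover their labels through P.
    label_loss = ((torch.sign(U).detach() @ P - y_query) ** 2).mean() + \
                 ((B @ P - y_db) ** 2).mean()
    loss = sim_loss + 0.1 * quant_loss + 0.1 * label_loss  # assumed weights
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        # Discrete update of the database codes: a simple sign step standing in
        # for a proper bit-wise discrete optimization (B is never relaxed).
        B = torch.sign(S.t() @ U * code_length + y_db @ P.t())
```

Note the asymmetry: only the query side passes through the network, so each training step touches a small batch of queries while still supervising against all database codes, which is what keeps training tractable on large-scale datasets.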
