Abstract

With the rapid growth of multimedia data, it is highly desirable to search objects of interest across different modalities, both effectively and efficiently, in large-scale databases. Cross-modal hashing provides a promising way to address this problem. In this paper, we propose a two-step cross-modal hashing approach that obtains compact hash codes and learns hash functions from multimodal data. Our approach decomposes the cross-modal hashing problem into two steps: hash code generation and hash function learning. In the first step, we obtain the hash codes for all modalities of data via a joint multimodal graph, which accounts for both intra-modality and inter-modality similarity. In the second step, hash function learning is formulated as a binary classification problem: we train binary classifiers to predict the hash code of any previously unseen data object. Experimental results on two cross-modal datasets show the effectiveness of our proposed approach.
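
To make the two-step structure concrete, here is a minimal sketch in Python. It assumes a spectral relaxation for step one (binarizing Laplacian eigenvectors of the joint graph), which is one standard way to extract codes from a similarity graph; the paper's exact objective may differ. The names (W_intra_img, W_intra_txt, W_inter, n_bits) and the choice of logistic regression as the per-bit classifier are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import LogisticRegression

def learn_cross_modal_hash(X_img, X_txt, W_intra_img, W_intra_txt, W_inter, n_bits=16):
    """Two-step sketch: (1) binary codes from a joint multimodal graph,
    (2) one binary classifier per bit as the hash function of each modality.

    Assumes X_img, X_txt hold n paired objects (row i of each describes the
    same object) and the W_* matrices are precomputed similarities.
    """
    n = X_img.shape[0]
    # Step 1: joint graph over 2n nodes (image nodes first, text nodes second),
    # combining the intra-modality blocks with the inter-modality block.
    W = np.zeros((2 * n, 2 * n))
    W[:n, :n] = W_intra_img
    W[n:, n:] = W_intra_txt
    W[:n, n:] = W_inter
    W[n:, :n] = W_inter.T
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # unnormalized graph Laplacian
    # Smallest nontrivial eigenvectors give a real-valued embedding; sign
    # thresholding yields binary codes (a common spectral relaxation).
    _, vecs = eigh(L, subset_by_index=[1, n_bits])
    codes = (vecs > 0).astype(int)              # (2n, n_bits) binary codes
    codes_img, codes_txt = codes[:n], codes[n:]
    # Step 2: hash function learning as per-bit binary classification.
    f_img = [LogisticRegression().fit(X_img, codes_img[:, b]) for b in range(n_bits)]
    f_txt = [LogisticRegression().fit(X_txt, codes_txt[:, b]) for b in range(n_bits)]
    return f_img, f_txt

def hash_unseen(classifiers, X_new):
    # Predict each bit independently to encode previously unseen objects.
    return np.stack([clf.predict(X_new) for clf in classifiers], axis=1)
```

At query time, an unseen object in either modality is encoded by its modality's classifiers, and retrieval reduces to Hamming-distance search over the stored codes of the other modality.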
