Abstract

In this paper, we propose a supervised discrete online hashing (SDOH) method for online cross-modal retrieval. Unlike most existing batch-based cross-modal hashing methods, which accumulate newly arriving data together with the previous samples to recompute the hash functions and hash codes, the proposed method efficiently generates the hash codes of newly arriving training data and retrains the hash functions for query samples based only on the new data. Specifically, we first calculate, in parallel, the common representations of the multi-modal data at different time stamps by embedding the semantic labels into the common representation, so that the common representations of the old and new training data can be computed separately while the heterogeneity gap among modalities is reduced. Then, we convert the continuous common representation into a discrete binary space via an orthogonal rotation to obtain the hash codes of the new multi-modal training data. Finally, we learn modality-specific hash functions that directly convert query instances into hash codes for cross-modal retrieval. Experimental results on several benchmark databases substantiate the effectiveness and efficiency of the proposed method.
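The three steps of the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's actual formulation: the label embedding here is a random projection stand-in for the learned semantic embedding, the orthogonal rotation is solved ITQ-style by alternating a sign step with an orthogonal-Procrustes update, and the hash functions are plain ridge regressions; all variable names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-modal training chunk: two modalities sharing semantic labels.
n, d1, d2, c, k = 100, 32, 24, 5, 16    # samples, feature dims, classes, code length
X1 = rng.standard_normal((n, d1))       # modality-1 features (e.g., image)
X2 = rng.standard_normal((n, d2))       # modality-2 features (e.g., text)
L = np.eye(c)[rng.integers(0, c, n)]    # one-hot semantic labels

# Step 1 (assumed form): embed labels into a k-dim common representation.
# A random projection stands in for the paper's learned label embedding;
# because V depends only on this chunk's labels, each chunk can be
# processed separately as new data arrives.
V = L @ rng.standard_normal((c, k))

# Step 2: find an orthogonal rotation R aligning V with binary codes B,
# alternating sign() with the closed-form orthogonal-Procrustes update
# (ITQ-style; an assumption about the rotation step, not the paper's solver).
R = np.linalg.qr(rng.standard_normal((k, k)))[0]
for _ in range(20):
    B = np.sign(V @ R)                  # discrete codes for current rotation
    U, _, Vt = np.linalg.svd(V.T @ B)   # min ||B - V R||_F over orthogonal R
    R = U @ Vt
B = np.sign(V @ R)                      # final hash codes of the new chunk

# Step 3: modality-specific linear hash functions via ridge regression onto B.
W1 = np.linalg.solve(X1.T @ X1 + 1e-3 * np.eye(d1), X1.T @ B)
W2 = np.linalg.solve(X2.T @ X2 + 1e-3 * np.eye(d2), X2.T @ B)

# A query from either modality is hashed as sign(x @ W) for retrieval.
q_codes = np.sign(X1 @ W1)
```

Keeping Step 2 as a small k-by-k Procrustes problem is what makes the binarization cheap: the SVD cost depends on the code length, not on how much data has accumulated.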
