Abstract

Online cross-modal hashing has received increasing attention due to its efficiency and effectiveness in retrieving cross-modal streaming data. Despite their promising performance, existing methods mainly follow the supervised learning paradigm, which demands expensive and laborious effort to obtain cleanly annotated data. Existing unsupervised online hashing methods, in turn, mostly struggle to construct instructive semantic correlations among data chunks, resulting in forgetting of the accumulated data distribution. To this end, we propose a Dynamic Prototype-based Online Cross-modal Hashing method, called DPOCH. Based on pre-learned reliable common representations, DPOCH generates prototypes incrementally as sketches of the accumulated data and updates them dynamically to adapt to streaming data. Thereafter, prototype-based semantic embedding and similarity graphs are designed to promote the stability and generalization of the hashing process, yielding globally adaptive hash codes and hash functions. Experimental results on benchmark datasets demonstrate that the proposed DPOCH outperforms state-of-the-art unsupervised online cross-modal hashing methods.
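
To make the idea of incremental, dynamically updated prototypes concrete, the following is a minimal sketch of one plausible update scheme for streaming data chunks. The function names, the nearest-prototype assignment, and the momentum rule are illustrative assumptions only; they are not taken from DPOCH's actual formulation.

```python
# Hypothetical sketch: maintain prototypes as running sketches of accumulated
# data and refresh them as each new streaming chunk arrives.
import numpy as np

def update_prototypes(prototypes, counts, chunk, momentum=0.9):
    """Assign each new sample to its nearest prototype and update that
    prototype so it summarizes both past chunks and the new chunk.

    prototypes : (k, d) array of current prototype vectors
    counts     : (k,) array of samples absorbed by each prototype
    chunk      : (n, d) array of common representations for the new chunk
    """
    # Nearest-prototype assignment by Euclidean distance.
    dists = np.linalg.norm(chunk[:, None, :] - prototypes[None, :, :], axis=2)
    assign = dists.argmin(axis=1)

    for j in np.unique(assign):
        members = chunk[assign == j]
        # Momentum-style update: keep a sketch of the accumulated distribution
        # while adapting to the newly arrived data.
        prototypes[j] = momentum * prototypes[j] + (1 - momentum) * members.mean(axis=0)
        counts[j] += len(members)
    return prototypes, counts

# Usage: k = 8 prototypes of dimension d = 16, updated chunk by chunk.
rng = np.random.default_rng(0)
protos, cnts = rng.standard_normal((8, 16)), np.zeros(8)
for _ in range(3):  # three streaming chunks
    protos, cnts = update_prototypes(protos, cnts, rng.standard_normal((100, 16)))
```

Updating prototypes in place, rather than retraining on all accumulated data, is what keeps such a scheme suitable for the online setting; the prototypes then serve as anchors for the semantic embedding and similarity graphs described above.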
