Abstract
Unsupervised cross-modal hashing has attracted considerable attention for supporting large-scale cross-modal retrieval. Although promising progress has been made, existing methods still have limited capability to extract and preserve the intrinsic multi-modal semantics. In this paper, we propose a Correlation-Identity Reconstruction Hashing (CIRH) method to alleviate this problem. We develop a new unsupervised deep cross-modal hash learning framework that models heterogeneous multi-modal correlation semantics and preserves them in both the hash codes and the hash functions, while simultaneously endowing both with descriptive identity semantics. Specifically, we construct a multi-modal collaborated graph to model the heterogeneous multi-modal correlations, and jointly perform intra-modal and cross-modal semantic aggregation on homogeneous and heterogeneous graph networks to generate a multi-modal complementary representation under a correlation reconstruction objective. Furthermore, an identity semantic reconstruction process endows the generated representation with identity semantics by reconstructing the input modality representations. Finally, we propose a correlation-identity consistent hash function learning strategy that transfers the modelled multi-modal semantics into the neural networks of modality-specific deep hash functions. Experiments demonstrate the superior retrieval accuracy and efficiency of the proposed method. Our source code and experimental datasets are available at https://github.com/XizeWu/CIRH.
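To make the pipeline described above concrete, the following is a minimal sketch of the two reconstruction objectives (correlation and identity) on top of a fused multi-modal graph. It is an illustrative assumption, not the released implementation: the fused-graph construction, the linear projection/decoder modules, and all dimensions are placeholders; see the linked repository for the authors' code.

```python
# Minimal sketch of the CIRH-style training objectives, assuming simple
# linear projections and a cosine-similarity fused graph (illustrative only).
import torch
import torch.nn.functional as F


def collaborative_graph(img_feat, txt_feat):
    """Fuse modality-specific cosine-similarity graphs into one
    multi-modal affinity matrix (one plausible construction)."""
    s_img = F.normalize(img_feat, dim=1) @ F.normalize(img_feat, dim=1).t()
    s_txt = F.normalize(txt_feat, dim=1) @ F.normalize(txt_feat, dim=1).t()
    return 0.5 * (s_img + s_txt)


class Aggregator(torch.nn.Module):
    """Intra-modal aggregation on the fused graph, cross-modal fusion into a
    relaxed code, and per-modality decoders for identity reconstruction."""

    def __init__(self, d_img, d_txt, code_len):
        super().__init__()
        self.proj_img = torch.nn.Linear(d_img, code_len)
        self.proj_txt = torch.nn.Linear(d_txt, code_len)
        self.dec_img = torch.nn.Linear(code_len, d_img)  # identity decoders
        self.dec_txt = torch.nn.Linear(code_len, d_txt)

    def forward(self, img_feat, txt_feat, affinity):
        # intra-modal semantic aggregation over the graph neighbourhood
        img_agg = affinity @ self.proj_img(img_feat)
        txt_agg = affinity @ self.proj_txt(txt_feat)
        # cross-modal fusion into a relaxed code in (-1, 1)
        fused = torch.tanh(0.5 * (img_agg + txt_agg))
        return fused, self.dec_img(fused), self.dec_txt(fused)


def reconstruction_losses(fused, rec_img, rec_txt, img_feat, txt_feat, affinity):
    # correlation reconstruction: code inner products should match the graph
    corr = F.mse_loss(fused @ fused.t() / fused.size(1), affinity)
    # identity reconstruction: recover the input modality representations
    ident = F.mse_loss(rec_img, img_feat) + F.mse_loss(rec_txt, txt_feat)
    return corr + ident


# usage with random features standing in for image/text embeddings
img, txt = torch.randn(32, 512), torch.randn(32, 300)
model = Aggregator(d_img=512, d_txt=300, code_len=64)
aff = collaborative_graph(img, txt)
fused, rec_img, rec_txt = model(img, txt, aff)
loss = reconstruction_losses(fused, rec_img, rec_txt, img, txt, aff)
loss.backward()
```

In the full method, binary hash codes would be obtained by thresholding the relaxed representation, and modality-specific deep hash functions would be trained to stay consistent with both reconstruction terms; those steps are omitted here.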