Abstract

Cross-modal hash retrieval is a mainstream research direction in computer vision and has attracted sustained attention from researchers. For real-world data, which is typically unlabeled, unsupervised cross-modal hash retrieval is especially important. To address the problem that many existing methods fail to convey the semantic information of high-level representations to the hash codes, this paper proposes a novel unsupervised cross-modal hashing method named Multi-layer Semantic Constraints Hashing (MLSCH). MLSCH uses intra-modal and inter-modal neighbor matrices to guide hash code generation and applies this neighbor structure to the feature representations of the different modalities, reconstructing cross-modal features that carry structural information and effectively improving the quality of the generated hash codes. Extensive experiments show that MLSCH outperforms state-of-the-art cross-modal hashing methods.
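The abstract does not give MLSCH's exact formulation, so the following PyTorch sketch only illustrates the general pattern it describes: fusing intra-modal cosine-similarity (neighbor) matrices into a joint target, then training relaxed hash codes whose inner products reproduce that neighbor structure. The function names, the fusion weight eta, and the loss form are illustrative assumptions, not the paper's definitions.

import torch
import torch.nn.functional as F

def fused_neighbor_matrix(img_feat, txt_feat, eta=0.5):
    # Intra-modal cosine-similarity (neighbor) matrices for each modality,
    # fused into one joint target. The weight `eta` is an assumed
    # hyperparameter, not a value from the paper.
    img = F.normalize(img_feat, dim=1)
    txt = F.normalize(txt_feat, dim=1)
    s_img = img @ img.t()            # image intra-modal neighbors
    s_txt = txt @ txt.t()            # text intra-modal neighbors
    return eta * s_img + (1.0 - eta) * s_txt

def neighbor_reconstruction_loss(h_img, h_txt, s_target):
    # Push scaled inner products of relaxed hash codes (values in (-1, 1))
    # toward the fused neighbor matrix, both across modalities
    # (inter-modal) and within each modality (intra-modal).
    k = h_img.size(1)                # code length
    loss = F.mse_loss(h_img @ h_txt.t() / k, s_target)
    loss = loss + F.mse_loss(h_img @ h_img.t() / k, s_target)
    loss = loss + F.mse_loss(h_txt @ h_txt.t() / k, s_target)
    return loss

# Toy usage: random features stand in for deep image/text embeddings.
img_feat = torch.randn(8, 512)
txt_feat = torch.randn(8, 300)
s = fused_neighbor_matrix(img_feat, txt_feat)
h_img = torch.tanh(torch.randn(8, 64))   # relaxed 64-bit image codes
h_txt = torch.tanh(torch.randn(8, 64))   # relaxed 64-bit text codes
print(neighbor_reconstruction_loss(h_img, h_txt, s))

At inference time, such methods typically binarize the relaxed codes with sign() and compare them by Hamming distance; how MLSCH binarizes its codes is not stated in the abstract.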
