Abstract

Cross-modal hashing is an efficient method for embedding high-dimensional, heterogeneous modal feature descriptors into a low-dimensional, consistency-preserving Hamming space. Most existing cross-modal hashing methods can bridge the heterogeneous modality gap, but two challenges still limit retrieval accuracy: (1) they ignore the continuous similarity of samples on the manifold; (2) hash codes with the same semantics lack discriminability. To address these problems, we propose a Deep Consistency-Preserving Hash Auto-encoders model, called DCPHA, based on the multi-manifold property of the feature distribution. Specifically, DCPHA consists of a pair of asymmetric auto-encoders and two semantics-preserving attention branches that work in the encoding and decoding stages, respectively. When the number of input medical image modalities is greater than two, the encoder is a multiple pseudo-Siamese network designed to extract modality-specific features from the different medical image modalities. In addition, we define the continuous similarity of heterogeneous and homogeneous samples on a Riemannian manifold from the perspective of multiple sub-manifolds, and embed two constraints, multi-semantic consistency and multi-manifold similarity preserving, into the learning of the hash codes to obtain high-quality, consistency-preserving hash codes. Extensive experiments show that the proposed DCPHA achieves the most stable, state-of-the-art performance. Code and models are publicly available at https://github.com/Socrates023/DCPHA.
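The core idea of cross-modal hashing described above can be illustrated with a toy sketch: modality-specific encoders map heterogeneous feature vectors into a shared low-dimensional space, the embeddings are binarized into hash codes, and retrieval ranks candidates by Hamming distance. This is a minimal illustration only; the names, dimensions, and single-layer encoders are assumptions, not the DCPHA architecture (which uses deep asymmetric auto-encoders with attention branches and manifold-based constraints).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy modality-specific encoder: a random linear map squashed by tanh,
    # giving a continuous embedding in [-1, 1]^k (a stand-in for a deep encoder).
    return np.tanh(x @ W)

def to_hash(z):
    # Binarize the continuous embedding into a hash code in {-1, +1}^k.
    return np.sign(z).astype(int)

def hamming(a, b):
    # Hamming distance: number of code bits that disagree.
    return int(np.sum(a != b))

# Assumed toy dimensions: image features (64-d), text features (32-d), 16-bit codes.
d_img, d_txt, k = 64, 32, 16
W_img = rng.standard_normal((d_img, k))
W_txt = rng.standard_normal((d_txt, k))

x_img = rng.standard_normal(d_img)
x_txt = rng.standard_normal(d_txt)

code_img = to_hash(encode(x_img, W_img))
code_txt = to_hash(encode(x_txt, W_txt))

# Cross-modal retrieval would rank database items by this distance.
print(hamming(code_img, code_txt))
```

Training a real cross-modal hashing model amounts to learning the encoders so that semantically matching image/text pairs land on nearby codes, which is what DCPHA's consistency and similarity-preserving constraints enforce.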
