Abstract

Cross-modal hashing has garnered considerable attention and achieved great success in many cross-media similarity search applications owing to its prominent computational efficiency and low storage overhead. However, it remains challenging to effectively exploit multilevel semantics over the entire database so as to jointly bridge the semantic and heterogeneity gaps across different modalities. In this paper, we propose a novel Modality-Invariant Asymmetric Networks (MIAN) architecture, which explores asymmetric intra- and inter-modal similarity preservation under a probabilistic modality alignment framework. Specifically, an intra-modal asymmetric network is conceived to capture the query-vs-all internal pairwise similarities of each modality in a probabilistic asymmetric learning manner. Moreover, an inter-modal asymmetric network is deployed to fully harness the cross-modal semantic similarities, supported by a maximum inner product search formulation between two distinct hash embeddings. In particular, the pairwise, piecewise, and transformed semantics are jointly incorporated into one unified semantic-preserving hash code learning scheme. Furthermore, we construct a modality alignment network to distill redundancy-free visual features and maximize the conditional bottleneck information between different modalities, which narrows the heterogeneity and domain shift across modalities. Extensive experiments demonstrate that our MIAN approach outperforms state-of-the-art cross-modal hashing methods.
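
To make the asymmetric similarity preservation mentioned above concrete, the following is a minimal illustrative sketch of an asymmetric inner-product objective: binary query codes from one modality are compared against real-valued (relaxed) database embeddings of the other modality, and the scaled inner product is regressed toward a semantic similarity label. The code length, toy data, and squared-error form are assumptions for illustration only, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 16                   # hash code length (assumed)
n_query, n_db = 4, 100   # toy query/database sizes (assumed)

# Binary query codes in {-1, +1}^k for one modality.
b_query = np.sign(rng.standard_normal((n_query, k)))
# Relaxed (real-valued) database embeddings for the other modality.
v_db = np.tanh(rng.standard_normal((n_db, k)))
# Semantic similarity labels in {-1, +1} between queries and database items.
s = rng.integers(0, 2, size=(n_query, n_db)) * 2 - 1

# Asymmetric cross-modal similarity: inner product between binary codes and
# relaxed embeddings, scaled to [-1, 1] by the code length.
inner = b_query @ v_db.T / k

# Squared loss encouraging the inner product to agree with the semantic label.
loss = np.mean((inner - s) ** 2)
print(f"asymmetric similarity-preservation loss: {loss:.4f}")
```

The asymmetry here refers to comparing binarized query codes against continuous database embeddings, which allows query-vs-all similarities over the entire database to be supervised without binarizing both sides.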
