Abstract
Multi-modal hashing focuses on fusing different modalities and exploiting the complementarity of heterogeneous multi-modal data for compact hash learning. However, existing multi-modal hashing methods still suffer from several problems: 1) Almost all existing methods generate unexplainable hash codes. They roughly assume that every hash code bit contributes equally to the retrieval results, ignoring the discriminative information embedded in hash learning and the semantic similarity in hash retrieval. Moreover, the hash code length is set empirically, which causes bit redundancy and degrades retrieval accuracy. 2) Most existing methods rely on shallow models that fail to fully capture the higher-level correlations of multi-modal data. 3) Most existing methods adopt an online hashing strategy based on an immutable direct projection, which generates query codes for new samples without considering differences among semantic categories. In this paper, we propose a Semantic-driven Interpretable Deep Multi-modal Hashing (SIDMH) method that generates interpretable hash codes driven by semantic categories within a deep hashing architecture, addressing all three problems in an integrated model. The main contributions are: 1) A novel deep multi-modal hashing network is developed to progressively extract hidden representations of heterogeneous modality features and deeply exploit the complementarity of multi-modal data. 2) Interpretable hash codes are learned, with the discriminative information of different categories distinctively embedded into the hash codes and their different impacts on hash retrieval intuitively explained. In addition, the code length depends on the number of categories in the dataset, which reduces bit redundancy and improves retrieval accuracy. 3) A semantic-driven online hashing strategy encodes the significant branches and discards the negligible branches of each query sample according to the semantics it contains, so it can capture the varying semantics of dynamic queries. Finally, we consider both the nearest-neighbor similarity and the semantic similarity of hash codes. Experiments on several public multimedia retrieval datasets validate the superiority of the proposed method.
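To make the described mechanism concrete, the following is a minimal sketch, not the authors' implementation, of the two ideas stated in the abstract: a code whose length is tied to the number of categories (each category owning a block of bits), and a semantic-driven query encoding that keeps "significant" category branches and discards "negligible" ones, with retrieval blending nearest-neighbor and semantic similarity. The block layout, threshold masking, random projections, and the alpha weighting are illustrative assumptions, not details from the paper.

```python
# Sketch of semantic-driven, category-structured hashing (assumptions noted above).
import numpy as np

N_CATEGORIES = 4          # number of semantic categories in the dataset
BITS_PER_CATEGORY = 8     # hypothetical per-category block length
CODE_LENGTH = N_CATEGORIES * BITS_PER_CATEGORY

rng = np.random.default_rng(0)
# Hypothetical per-category projections standing in for the learned deep network.
projections = rng.standard_normal((N_CATEGORIES, 64, BITS_PER_CATEGORY))

def encode(feature, category_scores, threshold=0.25):
    """Encode a fused multi-modal feature into a category-blocked code.

    Blocks of categories whose score exceeds the threshold are encoded;
    the remaining ("negligible") blocks are discarded as zeros.
    """
    code = np.zeros(CODE_LENGTH, dtype=np.int8)
    mask = np.zeros(N_CATEGORIES, dtype=bool)
    for c, score in enumerate(category_scores):
        if score >= threshold:
            bits = np.sign(feature @ projections[c])           # +1 / -1 bits
            code[c * BITS_PER_CATEGORY:(c + 1) * BITS_PER_CATEGORY] = bits
            mask[c] = True
    return code, mask

def similarity(q_code, q_mask, db_code, db_mask, alpha=0.5):
    """Blend bit-level (nearest-neighbor) similarity with semantic
    similarity measured as overlap of active category blocks."""
    active = np.repeat(q_mask & db_mask, BITS_PER_CATEGORY)
    hamming_sim = np.mean(q_code[active] == db_code[active]) if active.any() else 0.0
    semantic_sim = (q_mask & db_mask).sum() / max((q_mask | db_mask).sum(), 1)
    return alpha * hamming_sim + (1 - alpha) * semantic_sim

# Toy usage: encode a query and one database item, then score the pair.
query_feat, db_feat = rng.standard_normal(64), rng.standard_normal(64)
q_code, q_mask = encode(query_feat, category_scores=[0.7, 0.1, 0.4, 0.0])
d_code, d_mask = encode(db_feat, category_scores=[0.6, 0.0, 0.5, 0.2])
print(similarity(q_code, q_mask, d_code, d_mask))
```

Under these assumptions, a query's code is interpretable block by block: each active block names the category driving it, and categories judged negligible contribute no bits to the comparison.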