Abstract

Owing to its low storage and search costs, cross-modal hashing has received much research interest in the big data era. With the application of deep learning, cross-modal representation capabilities have improved markedly. However, existing deep hashing methods fail to consider multi-label semantic learning and cross-modal similarity learning simultaneously: potential semantic correlations among multimedia data are not fully mined from multi-category labels, which in turn degrades the similarity preservation of cross-modal hash codes. To this end, this paper proposes deep multi-semantic fusion-based cross-modal hashing (DMSFH), which uses two deep neural networks to extract cross-modal features and a multi-label semantic fusion method to improve cross-modally consistent, semantically discriminative learning. Moreover, a graph regularization method is combined with inter-modal and intra-modal pairwise losses to preserve nearest-neighbor relationships between data in the Hamming subspace. Thus, DMSFH not only retains semantic similarity between multi-modal data, but also integrates multi-label information into modality learning. Extensive experimental results on two commonly used benchmark datasets show that DMSFH is competitive with state-of-the-art methods.
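
To make the loss design above concrete, here is a minimal sketch, in PyTorch, of how an inter-modal pairwise likelihood loss can be combined with a graph-regularization term. It assumes a DCMH-style negative log-likelihood formulation; the function names, tensor shapes, and weighting of terms are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


def pairwise_loss(f_img, f_txt, sim):
    """Inter-modal pairwise negative log-likelihood (DCMH-style sketch).

    f_img, f_txt: (n, k) real-valued outputs of the image/text networks.
    sim: (n, n) matrix with sim[i, j] = 1 if samples i and j share a label.
    """
    theta = 0.5 * f_img @ f_txt.t()             # pairwise inner products
    # softplus(theta) - sim * theta == log(1 + e^theta) - s_ij * theta
    return (F.softplus(theta) - sim * theta).mean()


def graph_regularizer(f, laplacian):
    """tr(F^T L F): penalizes distant codes for neighboring samples.

    laplacian: (n, n) graph Laplacian L = D - W of a k-NN affinity W.
    """
    return torch.trace(f.t() @ laplacian @ f) / f.shape[0]


# Hypothetical training step (img_net / txt_net are the two feature networks):
# loss = pairwise_loss(img_net(x_img), txt_net(x_txt), sim) \
#        + lam * (graph_regularizer(f_img, L) + graph_regularizer(f_txt, L))
```

A full objective of this kind would also add the symmetric intra-modal terms (image-image and text-text) and a quantization penalty between the real-valued outputs F and their binarized codes sign(F).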

Highlights

  • In recent years, with the rapid development of information technology, massive amounts of multi-modal data have been collected and stored on the Internet

  • We propose a novel deep learning-based cross-modal hashing method, termed deep multi-semantic fusion-based cross-modal hashing (DMSFH), which integrates cross-modal feature learning, multi-label semantic fusion, and hash code learning into an end-to-end architecture

  • We propose an effective hashing approach, dubbed deep multi-semantic fusion-based cross-modal hashing (DMSFH), to improve semantically discriminative feature learning and similarity preservation of hash codes in a common Hamming subspace



Introduction

With the rapid development of information technology, massive amounts of multi-modal data (e.g., text [1], image [2], audio [3], video [4], and 3D models [5]) have been collected and stored on the Internet. Most existing cross-modal retrieval methods, including traditional statistical correlation analysis [15], graph regularization [16], and dictionary learning [17], learn a common subspace [18,19,20,21] for multi-modal samples in which the semantic similarity between different modalities can be measured. Based on canonical correlation analysis (CCA) [22], several cross-modal retrieval methods [23,24,25] learn a common subspace in which the correlations between different modalities are measured, and the methods in [17,29,30] draw on dictionary learning to obtain consistent representations for multi-modal data. However, such real-valued common-subspace methods incur high storage and query costs on large-scale data. To overcome these shortcomings, hashing-based cross-modal retrieval techniques are gradually replacing the traditional ones.
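
As a concrete illustration of the common-subspace idea behind the CCA-based methods [22,23,24,25], the sketch below uses scikit-learn's CCA to project two modalities into a shared space and rank items of one modality against a query from the other. The feature dimensions and synthetic data are placeholders, not from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_img = rng.standard_normal((500, 512))    # placeholder image features
X_txt = rng.standard_normal((500, 300))    # placeholder text features

# Learn a 16-dimensional common subspace maximizing cross-modal correlation.
cca = CCA(n_components=16)
cca.fit(X_img, X_txt)
Z_img, Z_txt = cca.transform(X_img, X_txt)

# Image-to-text retrieval: rank texts by cosine similarity to an image query.
q = Z_img[0]
scores = Z_txt @ q / (np.linalg.norm(Z_txt, axis=1) * np.linalg.norm(q) + 1e-12)
top5 = np.argsort(-scores)[:5]
```

Hashing-based methods replace the real-valued projections Z with compact binary codes, which is what makes large-scale storage cheap and retrieval reducible to fast Hamming-distance search.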

