Abstract

Cross-modal hashing retrieval methods have attracted much attention for their effectiveness and efficiency. However, most existing hashing methods struggle to precisely learn the latent correlations between different modalities from binary codes with minimal loss. In addition, solving for binary codes across modalities is an NP-hard problem. To overcome these challenges, we propose a novel adaptive fast cross-modal hashing retrieval method, inspired by the DBSCAN clustering algorithm, named Cross-modal Hashing Retrieval Based on Density Clustering (DCCH). DCCH exploits the global density correlation between different modalities to select representative instances that precisely stand in for the entire dataset. Furthermore, DCCH excludes the adverse effects of noise points and uses a discrete optimization process to obtain the hash functions. Extensive experiments show that DCCH outperforms other state-of-the-art cross-modal methods on three benchmark bimodal datasets, i.e., Wiki, MIRFlickr and NUS-WIDE, and thus demonstrate that DCCH is both usable and efficient.
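The density-based instance selection described above can be sketched with scikit-learn's DBSCAN. This is a minimal illustration, not the authors' implementation: the feature matrix, `eps`, and `min_samples` values are hypothetical placeholders, and the sketch only shows how density-core points can serve as representatives while noise points are discarded before hash learning.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical stand-in for one modality's feature matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))

# DBSCAN labels each point; label -1 marks noise.
db = DBSCAN(eps=3.5, min_samples=5).fit(X)
labels = db.labels_

# Core samples are the high-density points DBSCAN builds clusters from.
core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True

representatives = X[core_mask]   # density-core instances used as representatives
clean = X[labels != -1]          # all non-noise points kept for hash learning
```

Core points are by definition never labeled as noise, so `representatives` is always a subset of `clean`; tuning `eps` and `min_samples` controls how aggressively noise is pruned.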

Highlights

  • With the growing demand for information in computer vision, information retrieval, image processing and related areas [1]–[5], cross-modal retrieval has been increasingly widely studied in recent years

  • To efficiently address the challenges mentioned above, we propose a novel supervised cross-modal hashing algorithm called Cross-modal Hashing Retrieval Based on Density Clustering (DCCH)

  • To address the above problems, we replace the original random point selection process with an adaptive fast cross-modal hashing retrieval method inspired by the density clustering algorithm, called Cross-modal Hashing Retrieval Based on Density Clustering (DCCH)


Summary

Introduction

With the growing demand for information in computer vision, information retrieval, image processing and related areas [1]–[5], cross-modal retrieval has been increasingly widely studied in recent years. Because data from different modalities always lie in heterogeneous feature spaces, cross-modal retrieval methods can relate one modality of data to another, rather than operating on a single modality as unimodal methods do, which makes them all the more significant in recent research [6]–[14]. The most typical method, Canonical Correlation Analysis (CCA) [16], maps the raw feature data into a common latent subspace by maximizing the correlation between the different modalities, and new instances are queried directly via similarities in this common subspace. Even though these methods have already made great advances, some problems remain unsolved, such as high computational cost and poor scalability.
