Abstract

Hashing methods have gained widespread attention in cross-modal retrieval applications due to their efficiency and effectiveness. Although many methods have been proposed, they fail to capture either the feature-based similarity consistency or the discriminative semantics of label consistency. In addition, most of them suffer from large quantization loss, resulting in low retrieval performance. To address these issues, we propose a novel cross-modal hashing method named Label Consistent Locally Linear Embedding based Cross-modal Hashing (LCLCH). LCLCH preserves the non-linear manifold structure of each modality via Locally Linear Embedding and transforms the heterogeneous data into a latent common semantic space, reducing the semantic gap and supporting cross-modal retrieval tasks. In this way, it not only discovers the latent correlations among heterogeneous cross-modal data but also maintains label consistency. To further ensure the effectiveness of hash code learning, we apply an iterative quantization method to solve the discrete optimization problem and obtain the hash codes directly. We compare LCLCH with several state-of-the-art supervised and unsupervised methods on three benchmark datasets to evaluate its effectiveness.
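To make the two core ingredients concrete, the following is a minimal sketch, not the authors' implementation: it shows (1) the standard Locally Linear Embedding step of computing neighbourhood reconstruction weights for one modality, and (2) an ITQ-style iterative quantization that alternates between binarising an embedding and rotating it to reduce quantization loss. The function names `lle_weights` and `itq_quantize`, and all parameter defaults, are illustrative assumptions; LCLCH's actual objective additionally couples the modalities and enforces label consistency, which this sketch omits.

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    """LLE step: express each sample as an affine combination of its
    k nearest neighbours; the weights encode local manifold structure."""
    n = X.shape[0]
    W = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]           # skip the point itself
        Z = X[nbrs] - X[i]                          # centre neighbours on x_i
        G = Z @ Z.T                                 # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)          # regularise for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                    # affine weights sum to one
    return W

def itq_quantize(V, iters=20, seed=0):
    """ITQ-style quantization: alternate between fixing the rotation R to
    update binary codes B = sign(VR), and solving an orthogonal Procrustes
    problem to update R, shrinking the quantization loss ||B - VR||_F."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random orthogonal start
    for _ in range(iters):
        B = np.sign(V @ R)                            # fix R, update codes
        U, _, Wt = np.linalg.svd(V.T @ B)             # fix B, update rotation
        R = U @ Wt
    return np.sign(V @ R).astype(int)
```

In a full pipeline, the reconstruction weights from `lle_weights` would constrain the shared latent embedding (each latent point should be reconstructed by the same neighbours with the same weights), and `itq_quantize` would then map that real-valued embedding to binary hash codes.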
