Abstract

Most existing cross-modal retrieval methods ignore the discriminative semantics embedded in multi-modal data and the unique characteristics of different sub-retrieval tasks. To address this problem, we propose a novel approach named Joint Feature selection and Graph regularization for Modality-dependent cross-modal retrieval (JFGM). The key idea of JFGM is to learn a modality-dependent subspace for each sub-retrieval task while simultaneously preserving the semantic consistency of the multi-modal data. Specifically, in addition to learning a shared subspace between modalities, a linear regression term is introduced to further correlate the discovered modality-dependent subspace with the explicit semantic space. Furthermore, a multi-modal graph regularization term is formulated to preserve both inter-modality and intra-modality semantic consistency. To avoid over-fitting and to select discriminative features, an ℓ2,1-norm penalty is imposed on the projection matrices. Experimental results on several publicly available datasets demonstrate the superiority of the proposed method over several state-of-the-art approaches.
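
The abstract does not give the objective function, so the following is only a minimal sketch of how its three ingredients (semantic regression, multi-modal graph regularization, and ℓ2,1-norm feature selection) are commonly combined for a single sub-retrieval task such as image-to-text. All notation here is assumed for illustration and is not taken from the paper: X_v and X_t denote the image and text feature matrices, U_v and U_t the learned projection matrices, Y the semantic label matrix, and L the Laplacian of a joint affinity graph over both modalities.

% Illustrative objective for one sub-retrieval task; the notation and the
% exact combination of terms are assumptions, not JFGM's actual formulation.
\begin{equation*}
\min_{U_v,\,U_t}\;
\lVert X_v U_v - X_t U_t \rVert_F^2
+ \alpha\,\lVert X_v U_v - Y \rVert_F^2
+ \beta\,\operatorname{tr}\!\left( Z^{\top} L\, Z \right)
+ \lambda \left( \lVert U_v \rVert_{2,1} + \lVert U_t \rVert_{2,1} \right)
\end{equation*}
% Here Z stacks the projected features of both modalities, so the trace term
% penalizes projections that break intra- and inter-modality neighborhoods,
% while the row-sparse l2,1 penalty performs feature selection.

Under this reading, the ℓ2,1-norm of a matrix is the sum of the ℓ2 norms of its rows; minimizing it drives entire rows of U_v and U_t to zero, which is what discards non-discriminative input features.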
