Abstract

The heterogeneity of multimodal data is the main challenge in cross-media retrieval, and many methods have been developed to address it. Subspace learning is currently one of the mainstream approaches: its aim is to learn a latent shared subspace in which similarities between cross-modal data can be measured directly. However, most existing subspace learning algorithms focus only on supervised information, using labeled data to train a single pair of mapping matrices. In this paper, we propose a joint graph regularization based semi-supervised cross-media retrieval method (JGRHS), which makes full use of both labeled and unlabeled data. We jointly consider correlation analysis and semantic information when learning the projection matrices, so that paired data remain close and semantically consistent in the shared subspace; graph regularization further keeps the learned transformations consistent with the similarity structure within each modality. Retrieval results on three datasets indicate that the proposed method is effective in both theoretical analysis and practical applications.
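To make the overall idea concrete, the following is a minimal sketch of the kind of joint objective such semi-supervised subspace methods optimize. It is illustrative only and not the authors' exact formulation: the term weights (alpha, beta), the form of each term, and all function and variable names below are assumptions introduced for this example.

import numpy as np

def graph_laplacian(W):
    # Unnormalized graph Laplacian L = D - W for an intra-modality similarity graph W.
    return np.diag(W.sum(axis=1)) - W

def joint_objective(U, V, X, Y, S, W_x, W_y, alpha=1.0, beta=0.1):
    """Illustrative semi-supervised subspace objective (not the paper's exact model).

    X : (n, dx) image features, Y : (n, dy) text features (paired rows)
    S : (n_l, c) one-hot labels for the first n_l (labeled) pairs
    W_x, W_y : (n, n) similarity graphs over all n samples of each modality
    U : (dx, c), V : (dy, c) projection matrices into a shared c-dim subspace
    """
    n_l = S.shape[0]
    Px, Py = X @ U, Y @ V                        # projected representations
    corr = np.linalg.norm(Px - Py) ** 2          # keep paired data close (correlation term)
    sem = (np.linalg.norm(Px[:n_l] - S) ** 2     # labeled pairs should land near
           + np.linalg.norm(Py[:n_l] - S) ** 2)  # their class indicator (semantic term)
    Lx, Ly = graph_laplacian(W_x), graph_laplacian(W_y)
    graph = np.trace(Px.T @ Lx @ Px) + np.trace(Py.T @ Ly @ Py)  # smoothness on both graphs
    return corr + alpha * sem + beta * graph

In practice an objective of this form is minimized by alternating updates of U and V (or by gradient descent); the graph terms are what allow unlabeled samples to influence the learned mappings, which is the point of the semi-supervised formulation.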
