Abstract

Cross-modal retrieval has attracted considerable attention in recent years. Many existing works focus on correlation learning to generate a common subspace for cross-modal correlation measurement, while others use adversarial learning to reduce the heterogeneity of multimodal data. However, very few works combine correlation learning and adversarial learning to bridge the intermodal semantic gap and diminish cross-modal heterogeneity. This article proposes a novel cross-modal retrieval method, named Adversarial Learning based Semantic COrrelation Representation (ALSCOR), an end-to-end framework that integrates cross-modal representation learning, correlation learning, and adversarial learning. A canonical correlation analysis model, combined with VisNet and TxtNet, is proposed to capture cross-modal nonlinear correlation. In addition, an intramodal classifier and a modality classifier are used to learn intramodal discrimination and minimize intermodal heterogeneity. Comprehensive experiments are conducted on three benchmark datasets. The results demonstrate that the proposed ALSCOR outperforms state-of-the-art methods.
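
The abstract only names the components; the sketch below is a minimal PyTorch-style illustration of how such an architecture could be wired together, not the authors' implementation. The module names, layer sizes, and the cosine-similarity surrogate used in place of the paper's CCA-based correlation objective are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient in the
    backward pass, so the encoders learn to fool the modality classifier."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ModalityNet(nn.Module):
    """Projects one modality's input features into the common subspace
    (stands in for VisNet / TxtNet; the real designs are in the paper)."""
    def __init__(self, in_dim, common_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, common_dim))

    def forward(self, x):
        return self.net(x)

class ALSCORSketch(nn.Module):
    """Hypothetical end-to-end wiring: two encoders, a correlation term,
    an intramodal (semantic) classifier, and an adversarial modality classifier."""
    def __init__(self, img_dim=4096, txt_dim=300, common_dim=256, n_classes=10):
        super().__init__()
        self.vis_net = ModalityNet(img_dim, common_dim)    # VisNet stand-in
        self.txt_net = ModalityNet(txt_dim, common_dim)    # TxtNet stand-in
        self.label_clf = nn.Linear(common_dim, n_classes)  # intramodal classifier
        self.modality_clf = nn.Linear(common_dim, 2)       # modality classifier

    def forward(self, img, txt, lamb=1.0):
        v, t = self.vis_net(img), self.txt_net(txt)
        # Correlation surrogate: pull paired image/text embeddings together
        # (the paper uses a CCA-based nonlinear correlation loss instead).
        corr_loss = 1.0 - F.cosine_similarity(v, t).mean()
        # Semantic predictions for intramodal discrimination.
        v_logits, t_logits = self.label_clf(v), self.label_clf(t)
        # Adversarial branch through the gradient reversal layer.
        m_logits = self.modality_clf(GradReverse.apply(torch.cat([v, t]), lamb))
        return corr_loss, v_logits, t_logits, m_logits
```

In training, the correlation loss, the cross-entropy losses on `v_logits`/`t_logits` against the semantic labels, and the cross-entropy loss on `m_logits` against the modality labels would be summed and optimized jointly; the gradient reversal makes the modality term adversarial for the encoders.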
