Abstract
Cross-media retrieval has become a key problem in both research and application, in which users can search for results across all media types (text, image, audio, video, and 3-D model) by submitting a query of any media type. How to measure content similarity among different media types is the key challenge. Existing cross-media retrieval methods usually model pairwise correlation or semantic information separately. In fact, these two kinds of information are complementary, and optimizing them simultaneously can further improve accuracy. In this paper, we propose a novel feature learning algorithm for cross-media data, called joint representation learning (JRL), which jointly explores the correlation and semantic information in a unified optimization framework. JRL integrates sparse and semi-supervised regularization for different media types into one unified optimization problem, whereas existing feature learning methods generally focus on a single media type. On one hand, JRL learns sparse projection matrices for different media types simultaneously, so that different media can be aligned with each other, which makes the learned representation robust to noise. On the other hand, both labeled and unlabeled data of different media types are exploited. Unlabeled examples of different media types increase the diversity of the training data and boost the performance of joint representation learning. Furthermore, JRL not only reduces the dimension of the original features but also incorporates cross-media correlation into the final representation, which further improves the performance of both cross-media retrieval and single-media retrieval. Experiments on two datasets with up to five media types show the effectiveness of our proposed approach compared with state-of-the-art methods.
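To make the common-space idea described above concrete, the following is a minimal, hypothetical sketch: each media type's features are mapped by its own projection matrix into a shared semantic space, where retrieval reduces to nearest-neighbor search across media. The toy data, dimensions, and the plain ridge-regression solver are illustrative assumptions only; they stand in for, and do not reproduce, JRL's actual joint optimization with sparse and semi-supervised regularization.

```python
import numpy as np

# Hypothetical toy setup: paired image/text features sharing class labels.
rng = np.random.default_rng(0)
n, d_img, d_txt, c = 200, 128, 64, 10          # samples, feature dims, classes

labels = rng.integers(0, c, size=n)
Y = np.eye(c)[labels]                          # one-hot semantic targets
X_img = rng.normal(size=(n, d_img))            # image features (e.g., visual descriptors)
X_txt = rng.normal(size=(n, d_txt))            # text features (e.g., topic/tf-idf vectors)

def learn_projection(X, Y, lam=1.0):
    """Map features X into the shared semantic space spanned by Y.

    A ridge-regression stand-in for the sparse, semi-supervised projection
    learning described in the abstract (assumption, not the paper's method).
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

P_img = learn_projection(X_img, Y)             # d_img x c projection matrix
P_txt = learn_projection(X_txt, Y)             # d_txt x c projection matrix

def normalize(Z):
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)

# Cross-media retrieval: rank text items by cosine similarity to an image
# query after both are projected into the common space.
Z_img = normalize(X_img @ P_img)
Z_txt = normalize(X_txt @ P_txt)
query = Z_img[0]
ranking = np.argsort(-(Z_txt @ query))
print("Top-5 text items for image query 0:", ranking[:5])
print("Their labels:", labels[ranking[:5]], "query label:", labels[0])
```

In this sketch, joint learning is mimicked only in the sense that both projections target the same semantic space; the paper's contribution is to couple them in one optimization with sparsity and graph-based semi-supervised terms.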