Abstract

Cross-media retrieval aims to integrate and analyze the features of various modalities (e.g., text, image, and video) to mine their latent semantic information. In this paper, we propose a novel cross-media retrieval framework that performs coupled feature mapping and correlation mining in succession. Our method first learns two projection matrices that map the multimodal features into a common category space, in which homo- and hetero-correlation techniques can be applied directly. Homo-correlation captures the semantic category information within the same media type, while hetero-correlation captures the semantic category information between different media types; the two complement and reinforce each other. Experiments on two datasets, the Wikipedia dataset and the PASCAL VOC dataset, demonstrate that the proposed framework gives promising results compared to related state-of-the-art approaches.
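
To make the pipeline concrete, below is a minimal sketch of the coupled feature mapping and retrieval steps the abstract describes, assuming NumPy, toy random features, and a simple ridge-regression solver standing in for the paper's actual coupled objective (which jointly exploits the homo- and hetero-correlation terms). All names here (ridge_projection, U, V, the dimensions) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: n image/text pairs over c semantic categories.
# Dimensions are illustrative assumptions.
n, d_img, d_txt, c = 200, 128, 64, 10
X_img = rng.standard_normal((n, d_img))   # image features, one row per sample
X_txt = rng.standard_normal((n, d_txt))   # text features, paired with X_img
labels = rng.integers(0, c, size=n)
Y = np.eye(c)[labels]                     # one-hot category matrix (n x c)

def ridge_projection(X, Y, lam=1.0):
    """Least-squares projection of features X onto the category space Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Coupled feature mapping: one projection matrix per modality, both
# targeting the same common category space.
U = ridge_projection(X_img, Y)            # d_img x c
V = ridge_projection(X_txt, Y)            # d_txt x c

def cosine(a, B):
    """Cosine similarity between vector a and every row of B."""
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

# Cross-media retrieval: rank text samples for an image query by similarity
# in the shared category space (hetero-correlation in its simplest form).
query = X_img[0] @ U                      # image query mapped to category space
scores = cosine(query, X_txt @ V)         # similarity to every projected text
print("top-5 retrieved text indices:", np.argsort(-scores)[:5])
```

In this sketch the hetero-correlation step reduces to cross-modal cosine ranking in the shared space; a homo-correlation term would additionally compare projected samples of the same modality against their category semantics, which is omitted here for brevity.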
