Abstract

Unlike traditional methods that directly map different modalities into an isomorphic subspace for cross-media retrieval, this paper proposes a cross-media retrieval algorithm based on the consistency of collaborative representation (CR-CMR). To measure the similarity between data from different modalities, CR-CMR first takes advantage of dictionary learning techniques to obtain homogeneous collaborative representations for texts and images. It then considers the semantic consistency of the two modalities simultaneously and maps the collaborative representation coefficients into an isomorphic semantic subspace, in which cross-media retrieval is conducted. Experimental results on three benchmark datasets show that the algorithm is effective.
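The abstract only outlines the pipeline, so the sketch below is a minimal illustration of the general idea rather than the paper's exact formulation: a ridge-regression-style collaborative representation of each modality over a dictionary, followed by a learned linear map from the coefficients into a shared semantic (label) space where retrieval is done by similarity. The function names, parameter values, synthetic data, and the simplification of using the training samples themselves as dictionaries are all assumptions for illustration; the joint dictionary learning and consistency constraints of CR-CMR are not reproduced here.

```python
import numpy as np


def collaborative_coefficients(X, D, lam=0.1):
    """Ridge-regularized (collaborative) representation of samples X over dictionary D.

    X: (n_samples, feat_dim), D: (n_atoms, feat_dim).
    Returns A with shape (n_samples, n_atoms) such that X is approximately A @ D.
    """
    # Closed-form ridge solution: A = X D^T (D D^T + lam I)^{-1}
    G = D @ D.T + lam * np.eye(D.shape[0])
    return X @ D.T @ np.linalg.inv(G)


def semantic_projection(A, Y, beta=0.1):
    """Least-squares map from the coefficient space to the shared semantic (label) space."""
    # W = (A^T A + beta I)^{-1} A^T Y
    return np.linalg.solve(A.T @ A + beta * np.eye(A.shape[1]), A.T @ Y)


# --- toy demonstration with synthetic data (all sizes are illustrative) ---
rng = np.random.default_rng(0)
n_train, n_query, d_img, d_txt, n_classes = 200, 20, 128, 64, 10

labels = rng.integers(0, n_classes, n_train)
Y = np.eye(n_classes)[labels]                      # one-hot semantic matrix shared by both modalities

# Synthetic paired image/text features with class-dependent structure.
X_img = rng.normal(size=(n_train, d_img)) + Y @ rng.normal(size=(n_classes, d_img))
X_txt = rng.normal(size=(n_train, d_txt)) + Y @ rng.normal(size=(n_classes, d_txt))

# Use the training samples themselves as the dictionaries (a common simplification).
A_img = collaborative_coefficients(X_img, X_img)
A_txt = collaborative_coefficients(X_txt, X_txt)

# Learn one projection per modality into the common semantic space.
W_img = semantic_projection(A_img, Y)
W_txt = semantic_projection(A_txt, Y)

# Image-query-text retrieval: rank texts by cosine similarity in the semantic space.
q_idx = rng.choice(n_train, n_query, replace=False)
S_img = collaborative_coefficients(X_img[q_idx], X_img) @ W_img
S_txt = A_txt @ W_txt
sim = (S_img / np.linalg.norm(S_img, axis=1, keepdims=True)) @ \
      (S_txt / np.linalg.norm(S_txt, axis=1, keepdims=True)).T
top1 = sim.argmax(axis=1)
print("top-1 label agreement:", (labels[top1] == labels[q_idx]).mean())
```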
