Abstract

This paper introduces a novel approach for the search and retrieval of multimedia content. The proposed framework retrieves multiple media types simultaneously, namely 3D objects, 2D images and audio files, by utilizing an appropriately modified manifold learning algorithm. The latter, which is based on Laplacian Eigenmaps, maps the mono-modal low-level descriptors of the different modalities into a new low-dimensional multimodal feature space. In order to accelerate search and retrieval and make the framework suitable even for large-scale applications, a new multimedia indexing scheme is adopted. The retrieval accuracy of the proposed method is further improved through relevance feedback, which enables users to refine their queries by marking the retrieved results as relevant or non-relevant. Experiments performed on a multimodal dataset demonstrate the effectiveness and efficiency of our approach. Finally, the proposed framework can be easily extended to incorporate additional heterogeneous modalities.
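To illustrate the kind of mapping the abstract refers to, the following is a minimal sketch of standard Laplacian Eigenmaps (not the authors' modified variant, whose details are in the full paper): build a k-nearest-neighbour graph over the descriptors, weight edges with a heat kernel, and embed by solving the generalized eigenproblem of the graph Laplacian. All function and parameter names here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, n_neighbors=5, sigma=1.0):
    """Embed descriptors X (n_samples x n_features) into a
    low-dimensional space via standard Laplacian Eigenmaps."""
    n = X.shape[0]
    dist = cdist(X, X)  # pairwise Euclidean distances
    # Symmetric k-nearest-neighbour adjacency with heat-kernel weights
    nn = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    for i in range(n):
        for j in nn[i]:
            w = np.exp(-dist[i, j] ** 2 / (2.0 * sigma ** 2))
            W[i, j] = W[j, i] = w
    D = np.diag(W.sum(axis=1))  # degree matrix
    L = D - W                   # unnormalized graph Laplacian
    # Generalized eigenproblem L y = lambda D y;
    # discard the trivial constant eigenvector (smallest eigenvalue)
    _, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]
```

In a cross-modal setting such as the one described above, descriptors from each modality would first be placed in the same graph (e.g. by linking objects known to depict the same content), so that the resulting embedding is shared across modalities.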
