Abstract

With the popularity of social networks, people can easily generate rich content spanning multiple modalities. Effectively and simply estimating the similarity of such multi-modal content is increasingly important for providing better search services over rich media. This work aims to enhance similarity estimation and thereby improve the accuracy of multi-modal retrieval. To this end, a novel multi-modal feature extraction approach, which verifies the neighborhood reversibility of information objects across different modalities, is proposed to build reliable similarity estimates among multimedia documents. By verifying neighborhood reversibility over both single- and multi-modal instances, the reliability of the multi-modal subspace can be markedly improved. In addition, a new adaptive strategy, which fully exploits the distance distribution of the returned instances, is proposed to handle the neighbor selection problem. To further address the out-of-sample problem, a prediction scheme that constructs an over-complete set of bases is proposed to predict multi-modal features for newly arriving instances. Extensive experiments demonstrate that verifying neighborhood reversibility significantly improves the retrieval accuracy of multi-modal documents.
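The core verification step can be illustrated with a minimal sketch. Assuming both modalities have already been mapped into a common feature space, a cross-modal neighbor pair is kept only if the neighborhood relation holds in both directions. All function names and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def knn_indices(queries, gallery, k):
    """Indices of the k nearest gallery items for each query (Euclidean)."""
    d = np.linalg.norm(queries[:, None, :] - gallery[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]

def reversible_pairs(a, b, k=5):
    """Cross-modal pairs (i, j) whose neighborhood relation is reversible:
    b[j] is among the k nearest neighbors of a[i] in modality B, AND
    a[i] is among the k nearest neighbors of b[j] in modality A."""
    a_to_b = knn_indices(a, b, k)  # neighbors of each a[i] searched in B
    b_to_a = knn_indices(b, a, k)  # neighbors of each b[j] searched in A
    pairs = []
    for i in range(len(a)):
        for j in a_to_b[i]:
            if i in b_to_a[j]:  # keep the pair only if it is mutual
                pairs.append((i, int(j)))
    return pairs

# Toy usage: 20 image features and 20 text features in a shared 8-D space.
rng = np.random.default_rng(0)
img_feats = rng.standard_normal((20, 8))
txt_feats = img_feats + 0.1 * rng.standard_normal((20, 8))  # noisy counterparts
print(reversible_pairs(img_feats, txt_feats, k=3))
```

One-directional nearest-neighbor links are often asymmetric (a can be close to b without b ranking a highly in return), so requiring mutuality, as in the sketch above, filters out unreliable cross-modal correspondences before similarity estimation.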
