Abstract

Object retrieval has attracted much research attention in recent years. A central challenge in object retrieval is estimating the relevance among objects. In this paper, we focus on view-based object retrieval and propose a multi-scale object retrieval algorithm based on learning on a graph built from multimodal data. In our method, shape features are extracted from each view of an object. The relevance among objects is formulated in a hypergraph structure, where the distances between views in the feature space are used to generate the connections in the hypergraph. To achieve better representation performance, we propose a multi-scale hypergraph structure to model object correlations. Learning on the graph is then conducted to estimate the optimal relevance among the objects, which is used for retrieval. To evaluate the performance of the proposed method, we conduct experiments on the National Taiwan University dataset and the ETH dataset. Experimental results and comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed method.
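
The abstract does not give implementation details, but pipelines of this kind are commonly realized with the standard hypergraph transductive-learning formulation (incidence matrix, vertex and hyperedge degrees, and a regularized ranking step). The sketch below is a minimal illustration under that assumption: multi-scale hyperedges are formed from k-nearest neighbours in the view feature space at several scales, and relevance scores are propagated from a query. All function names, scale values, and parameters here are hypothetical, not the authors' exact method.

```python
import numpy as np

def build_incidence(dist, scales=(5, 10, 20)):
    """Multi-scale hypergraph construction (assumed form): for each object
    and each scale k, form one hyperedge from its k nearest neighbours
    under the pairwise view-feature distance matrix `dist` (n x n)."""
    n = dist.shape[0]
    edges = []
    for k in scales:
        for i in range(n):
            edges.append(np.argsort(dist[i])[:k])  # k-NN hyperedge centred at object i
    H = np.zeros((n, len(edges)))
    for j, verts in enumerate(edges):
        H[verts, j] = 1.0  # vertex-hyperedge incidence
    return H

def hypergraph_rank(H, y, alpha=0.9, w=None):
    """Transductive learning on the hypergraph:
    f = (1 - alpha) * (I - alpha * Theta)^{-1} y,
    with Theta = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    n, m = H.shape
    w = np.ones(m) if w is None else w   # hyperedge weights
    dv = H @ w                           # vertex degrees
    de = H.sum(axis=0)                   # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv @ H @ np.diag(w / de) @ H.T @ Dv
    return np.linalg.solve(np.eye(n) - alpha * Theta, (1 - alpha) * y)
```

In use, `y` could be a one-hot indicator marking the query object; sorting the remaining objects by their scores in `f` gives the retrieval ranking, with the multiple k-NN scales letting both tight and loose neighbourhoods contribute to the estimated relevance.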
