Abstract

3D model retrieval is widely used in domains such as computer-aided design, digital entertainment, and virtual reality. Recently, many graph-based methods have been proposed that exploit the multi-view information of 3D models. However, these methods are typically constrained by the many-to-many graph matching required to measure the similarity between pairs of models. In this article, we propose a multi-view graph matching method (MVGM) for 3D model retrieval. The proposed method decomposes the complicated multi-view graph-based similarity measure into multiple single-view graph-based similarity measures followed by fusion. First, we present a method for single-view graph generation, and we further propose a novel similarity measure on a single-view graph that leverages both node-wise context and model-wise context. Then, we propose multi-view fusion with diffusion, which collaboratively integrates the single-view similarities w.r.t. different viewpoints and adaptively learns their weights, to compute the multi-view similarity between pairs of models. In this way, the proposed method avoids the difficulty of defining and computing a traditional high-order graph. Moreover, the method is unsupervised and does not require a large-scale 3D dataset for model learning. We conduct evaluations on four popular and challenging datasets. Extensive experiments demonstrate the superiority and effectiveness of the proposed method against the state of the art. In particular, this unsupervised method achieves competitive performance against the most recent supervised and deep learning methods.
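To make the fusion step more concrete, the following is a minimal sketch of diffusion-based multi-view fusion in the spirit described above. It assumes per-view model-to-model similarity matrices have already been computed (e.g., by the single-view graph matching step); the function names, the agreement-based adaptive weighting, and the update rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def row_normalize(W):
    """Turn an affinity matrix into a row-stochastic transition matrix."""
    return W / W.sum(axis=1, keepdims=True)

def fuse_views_with_diffusion(view_sims, n_iters=20, alpha=0.9):
    """
    Fuse per-view similarity matrices into one multi-view similarity.

    view_sims : list of (n_models, n_models) similarity matrices,
                one per viewpoint.
    Returns a fused (n_models, n_models) similarity matrix.
    """
    # Per-view transition matrices over the model graph.
    P = [row_normalize(W) for W in view_sims]

    # Initialize the fused similarity as the uniform average of views.
    S = sum(P) / len(P)

    for _ in range(n_iters):
        # Adaptive view weights (an assumption here): views whose
        # similarities agree with the current fused estimate get
        # larger weights.
        scores = np.array([np.sum(P_v * S) for P_v in P])
        w = scores / scores.sum()

        # Diffuse the fused similarity through each view's graph,
        # retaining part of the initial average for stability.
        S_new = sum(w_v * (P_v @ S @ P_v.T) for w_v, P_v in zip(w, P))
        S = alpha * S_new + (1 - alpha) * sum(P) / len(P)

    # Symmetrize the final similarity before ranking retrieval results.
    return (S + S.T) / 2
```

In such a scheme, views that are consistent with the consensus similarity contribute more to the diffusion at each iteration, which mirrors the adaptive weight learning described in the abstract while avoiding any explicit high-order graph construction.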
