Abstract

Given a query that specifies a partial 3D shape, a Part-based 3D Model Retrieval (P3DMR) system finds 3D shapes whose part or parts match the query. One approach to P3DMR is to partition or segment whole models into sub-parts and perform query-part-to-target-parts matching. Whatever the definition of a part, e.g., a rectangular volume in Euclidean space or a part segmented on a mesh manifold, this computation is very costly; the part-whole matching must account for the varying position, scale, and orientation of the segmented sub-parts of every 3D whole shape in the database. Another approach, in an attempt to make part-whole matching efficient, approximates the part-whole inclusion test with a single comparison between a pair of features, one representing the part-based query and the other representing the whole shape. Aggregation of local geometric features of parts into one feature per whole 3D shape, e.g., via a Bag-of-Features approach, is an example. This approach has so far suffered from inaccuracy, as the aggregation is not optimized for the part-whole inclusion test of 3D shapes. This paper proposes a novel P3DMR algorithm called the Part-Whole Relation Embedding network (PWRE-net) that effectively and efficiently performs the part-whole inclusion test via a learned embedding into a common feature space. Using a deep neural network, PWRE-net learns, from a large number of part-whole shape pairs, a common embedding of partial shapes and their associated whole shapes. For training, datasets containing part-whole shape pairs are created automatically from unlabeled 3D models. Experimental evaluation shows that PWRE-net outperforms existing algorithms in both retrieval accuracy and efficiency.
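The abstract does not specify the network architecture or training objective, but the idea of embedding parts and wholes into a common feature space and training on automatically generated part-whole pairs can be illustrated with a minimal sketch. The PointNet-style encoder, the triplet-style ranking loss, and all names below (PointEncoder, part_whole_loss) are assumptions for illustration, not the paper's actual PWRE-net design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    """Hypothetical point-cloud encoder mapping a shape (part or whole)
    into the common embedding space; the real PWRE-net backbone may differ."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points):                      # points: (B, N, 3)
        per_point = self.mlp(points)                # (B, N, D)
        global_feat = per_point.max(dim=1).values   # permutation-invariant pooling
        return F.normalize(global_feat, dim=-1)     # unit-length embedding

def part_whole_loss(part_emb, whole_emb, margin=0.2):
    """Assumed triplet-style objective: pull a part toward the whole that
    contains it, push it away from the other wholes in the batch."""
    sim = part_emb @ whole_emb.t()                  # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                   # matching pairs on the diagonal
    neg_mask = ~torch.eye(sim.size(0), dtype=torch.bool)
    return F.relu(margin - pos + sim)[neg_mask].mean()

# Toy usage: part-whole pairs generated automatically from unlabeled models,
# e.g., by cropping sub-regions of each whole shape.
part_enc, whole_enc = PointEncoder(), PointEncoder()
parts  = torch.randn(8, 256, 3)    # cropped sub-parts (query side)
wholes = torch.randn(8, 1024, 3)   # whole shapes containing those parts
loss = part_whole_loss(part_enc(parts), whole_enc(wholes))
loss.backward()
```

At retrieval time, a query part would be embedded once and compared against precomputed whole-shape embeddings with a single dot product each, which is the efficiency argument made in the abstract.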
