Abstract

3D shape retrieval has long been an active research topic in computer vision; its goal is fast and efficient retrieval of 3D shapes that meet user needs. With the rapid development and popularization of touch-screen devices, hand-drawn sketches have arguably become the most convenient and user-friendly input form. However, the large gap between 3D shapes and 2D sketches is the main challenge affecting retrieval performance. In this paper, building on multi-view feature extraction for 3D shapes, we propose adding a sketch-view feature similarity comparison module during training to score the views that form the final feature descriptor. Specifically, we render each 3D shape into 2D views from multiple perspectives to represent the shape, extract features from the two types of input with two different networks, and design a similarity weighting module that computes a score for each view, from which the final descriptor is obtained. Finally, a descriptor similarity metric network is trained with a contrastive loss. Experimental results on the SHREC'13 dataset demonstrate the superiority and robustness of our method.
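As a concrete illustration of the pipeline the abstract describes, the sketch below shows one plausible form of the similarity weighting module and the contrastive training objective. The function names, the cosine similarity, the softmax weighting, the embedding size, and the margin value are our assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def similarity_weighted_descriptor(sketch_feat, view_feats):
    """Score each rendered view against the sketch feature and pool the
    views into one shape descriptor.

    sketch_feat: (B, D) sketch embeddings from the sketch network
    view_feats:  (B, V, D) embeddings of V rendered views per shape
    """
    # Cosine similarity between the sketch and every view: (B, V)
    sims = F.cosine_similarity(
        view_feats, sketch_feat.unsqueeze(1).expand_as(view_feats), dim=-1
    )
    # Turn per-view scores into weights (softmax is our assumption)
    weights = F.softmax(sims, dim=1)
    # Weighted sum of view features gives the final descriptor: (B, D)
    shape_desc = (weights.unsqueeze(-1) * view_feats).sum(dim=1)
    return shape_desc, weights

def contrastive_loss(sketch_desc, shape_desc, label, margin=1.0):
    """Standard contrastive loss: pull matching sketch/shape pairs
    together, push non-matching pairs at least `margin` apart.
    `label` is 1 for matching pairs, 0 otherwise."""
    d = F.pairwise_distance(sketch_desc, shape_desc)
    return (label * d.pow(2) + (1.0 - label) * F.relu(margin - d).pow(2)).mean()

# Toy usage with random features standing in for the two networks' outputs.
sketch = torch.randn(4, 256)       # batch of 4 sketch embeddings
views = torch.randn(4, 12, 256)    # 12 rendered views per shape
desc, w = similarity_weighted_descriptor(sketch, views)
loss = contrastive_loss(sketch, desc, torch.tensor([1.0, 0.0, 1.0, 0.0]))
```

The key design choice this sketch captures is that views resembling the query sketch dominate the pooled descriptor, rather than all views contributing equally as in plain average pooling.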
