Abstract

Owing to the large gap between the representations of sketches and 3D models, sketch-based 3D model retrieval is a challenging problem in graphics and computer vision. Some state-of-the-art approaches extract features from 2D sketches, render multiple projection views of each 3D model, and then select a single view to match against the sketch. However, "the best view" is hard to determine, and views of the same 3D model from different perspectives can be completely different. Other methods learn features to retrieve 3D models from 2D sketches; yet sketches are abstract images that are usually drawn subjectively, which makes them difficult to learn accurately. To address these problems, we propose a cross-domain correspondence method for sketch-based 3D model retrieval based on manifold ranking. Specifically, we first extract learned features of sketches and 3D models with a two-branch CNN. We then build cross-domain undirected graphs from the learned features and semantic labels to establish correspondences between sketches and 3D models. Finally, the retrieval results are computed by manifold ranking. Experimental results on the SHREC 13 and SHREC 14 datasets show superior performance on all seven standard metrics compared with state-of-the-art approaches.
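To illustrate the final ranking step, the following is a minimal sketch of classical manifold ranking (diffusion on a graph, in the style of Zhou et al.), which the abstract names as the retrieval mechanism. The toy affinity matrix, query vector, and `alpha` value below are illustrative assumptions, not the paper's actual cross-domain graph construction.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.9):
    """Closed-form manifold ranking: f* = (I - alpha * S)^{-1} y,
    where S = D^{-1/2} W D^{-1/2} is the symmetrically normalized
    affinity matrix and y marks the query node(s)."""
    d = W.sum(axis=1)
    # Guard against isolated nodes (zero degree) when normalizing.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# Toy cross-domain graph: node 0 is a query sketch, node 1 a 3D model
# strongly connected to it, node 2 a weakly connected (dissimilar) model.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
y = np.array([1.0, 0.0, 0.0])  # query indicator vector

scores = manifold_ranking(W, y)
# Node 1 (strongly linked to the query) should outrank node 2.
```

In practice, the edge weights of such a graph would come from the learned CNN features and semantic labels described above; the closed-form solve can be replaced by the equivalent iteration f(t+1) = alpha * S * f(t) + (1 - alpha) * y for large graphs.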
