Abstract

Content-based 3D object retrieval has wide applications in various domains, ranging from virtual reality to computer-aided design and entertainment. With the rapid development of digitizing technologies, different views of 3D objects can be captured, which calls for effective and efficient view-based 3D object retrieval (V3DOR) techniques. As each object is represented by a set of multiple views, V3DOR becomes a group matching problem. Most state-of-the-art V3DOR methods use a single feature to describe a 3D object, which is often insufficient. In this paper, we propose a feature fusion method via multi-modal graph learning for view-based 3D object retrieval. First, different visual features, including 2D Zernike moments, 2D Fourier descriptors and 2D Krawtchouk moments, are extracted to describe each view of a 3D object. Then the Hausdorff distance is computed to measure the similarity between two 3D objects, each represented by multiple views. Finally, we construct multiple graphs based on the different features and automatically learn the optimal weight of each graph for the feature fusion task. Extensive experiments are conducted on the ETH-80 dataset and the National Taiwan University 3D model dataset. The results demonstrate the superior performance of the proposed method compared to state-of-the-art approaches.
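To make the set-to-set matching step concrete, the sketch below computes the symmetric Hausdorff distance between the view-feature sets of two objects, as described in the abstract. This is a minimal illustration, not the paper's implementation: the function name `hausdorff_distance`, the use of NumPy, and the Euclidean metric between view descriptors are all assumptions.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two view-feature sets.

    A, B: arrays of shape (n_views, d) and (m_views, d), one row per
    view descriptor (e.g. Zernike, Fourier, or Krawtchouk features).
    """
    # Pairwise Euclidean distances between every view in A and B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Directed distances: worst-case best match in each direction.
    h_ab = D.min(axis=1).max()   # each view in A to its nearest in B
    h_ba = D.min(axis=0).max()   # each view in B to its nearest in A
    return max(h_ab, h_ba)
```

Applying this per feature modality yields one object-to-object distance matrix per feature, which can then feed the graph construction step.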

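The abstract only summarizes the multi-modal graph learning step, so the following is a hedged sketch of one standard formulation: per-feature distance matrices are turned into Gaussian-kernel affinity graphs, and ranking scores and graph weights are optimized alternately, with a closed-form simplex update for the weights. The objective, the exponent r, and all names (`affinity`, `multigraph_ranking`, `mu`, `alpha`) are illustrative assumptions rather than the paper's exact method.

```python
import numpy as np

def affinity(D, sigma=None):
    """Gaussian-kernel affinity graph from a pairwise distance matrix."""
    sigma = sigma if sigma is not None else D.mean()
    W = np.exp(-(D ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def multigraph_ranking(graphs, query_idx, mu=1.0, r=2.0, n_iter=10):
    """Rank objects on a weighted combination of feature graphs.

    graphs: list of (n, n) affinity matrices, one per visual feature.
    Alternates between (1) solving for ranking scores f that minimize
    f^T L f + mu * ||f - y||^2 on the fused Laplacian, and (2) a
    closed-form update of the graph weights alpha on the simplex.
    """
    n, K = graphs[0].shape[0], len(graphs)
    y = np.zeros(n)
    y[query_idx] = 1.0
    laplacians = [np.diag(W.sum(axis=1)) - W for W in graphs]
    alpha = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # Step 1: ranking scores on the alpha^r-weighted graph.
        L = sum(a ** r * Lk for a, Lk in zip(alpha, laplacians))
        f = np.linalg.solve(L + mu * np.eye(n), mu * y)
        # Step 2: weights favor graphs with smoother ranking scores.
        costs = np.array([f @ Lk @ f for Lk in laplacians])
        inv = (1.0 / np.maximum(costs, 1e-12)) ** (1.0 / (r - 1))
        alpha = inv / inv.sum()
    return f, alpha
```

In such a pipeline, each distance matrix would come from the Hausdorff computation above for one feature modality; the returned scores f rank all objects against the query, and alpha exposes the learned per-feature weights.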