Abstract
Content-based 3D object retrieval has wide applications in various domains, ranging from virtual reality to computer-aided design and entertainment. With the rapid development of digitizing technologies, different views of 3D objects are captured, which calls for effective and efficient view-based 3D object retrieval (V3DOR) techniques. As each object is represented by a set of multiple views, V3DOR becomes a group matching problem. Most state-of-the-art V3DOR methods use a single feature to describe a 3D object, which is often insufficient. In this paper, we propose a feature fusion method via multi-modal graph learning for view-based 3D object retrieval. First, different visual features, including 2D Zernike moments, 2D Fourier descriptors and 2D Krawtchouk moments, are extracted to describe each view of a 3D object. Then the Hausdorff distance is computed to measure the similarity between two 3D objects with multiple views. Finally, we construct multiple graphs based on the different features and automatically learn an optimal weight for each graph for the feature fusion task. Extensive experiments are conducted on the ETH-80 dataset and the National Taiwan University 3D model dataset. The results demonstrate the superior performance of the proposed method compared to state-of-the-art approaches.
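For illustration, the group matching step described above can be sketched as follows: if each 3D object is a set of per-view feature vectors (e.g., Zernike-moment descriptors of its views), the symmetric Hausdorff distance compares the two view sets by taking the worst-case nearest-view distance in both directions. This is a minimal sketch assuming Euclidean distance between view features; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def directed_hausdorff(A, B):
    # A: (n_views_a, feat_dim), B: (n_views_b, feat_dim) arrays of
    # per-view feature vectors (illustrative; any per-view descriptor works).
    # Pairwise Euclidean distances between every view of A and every view of B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # For each view of A, take its nearest view in B, then the worst case.
    return d.min(axis=1).max()

def hausdorff_distance(A, B):
    # Symmetric Hausdorff distance between two view sets: the larger of the
    # two directed distances, so neither object may have an unmatched view.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Usage: two objects with 8 views each, 36-dimensional view descriptors.
rng = np.random.default_rng(0)
obj_a, obj_b = rng.normal(size=(8, 36)), rng.normal(size=(8, 36))
print(hausdorff_distance(obj_a, obj_b))
```

In a multi-feature setting such as the one proposed here, this distance would be computed once per feature modality, yielding one similarity graph per feature, whose weights are then learned jointly.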