Abstract

Three-dimensional (3D) shape recognition is an active research topic with broad application value in computer vision. With the recent proliferation of deep learning, various deep learning models have achieved state-of-the-art performance. Among them, multiview-based 3D shape representation has received increased attention in recent years, and related approaches have shown significant improvement in 3D shape recognition. However, these methods focus on feature learning through network design and ignore the correlation among views. In this article, we propose a novel progressive feature guide learning network (PGNet) that focuses on the correlation among multiple views and integrates multiple modalities for 3D shape recognition. In particular, we propose two information fusion schemes, one from the visual perspective and one from the feature perspective. The visual fusion scheme operates at the view level and employs a soft-attention model to assign weights to views for visual information fusion. The feature fusion scheme operates on the feature dimensions and employs the quantified feature as a mask to further refine the feature. These two schemes jointly constitute the PGNet for 3D shape representation. Experiments on the classic ModelNet40 and ShapeNetCore55 datasets demonstrate the effectiveness and superiority of our approach.
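The two fusion schemes can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the module names (ViewAttentionFusion, FeatureMaskFusion), the sigmoid gating used to produce the "quantified" mask, and the tensor sizes are all assumptions made for illustration of soft-attention view weighting and per-dimension feature masking.

```python
# Minimal sketch (assumed PyTorch) of the two fusion ideas described in the abstract.
# All names and the exact gating mechanism are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAttentionFusion(nn.Module):
    """Visual fusion: soft-attention weights over per-view features."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # scores each view feature

    def forward(self, view_feats):                          # view_feats: (B, V, D)
        weights = F.softmax(self.score(view_feats), dim=1)  # (B, V, 1), sums to 1 over views
        return (weights * view_feats).sum(dim=1)            # weighted fusion -> (B, D)

class FeatureMaskFusion(nn.Module):
    """Feature fusion: a quantified (here, sigmoid-gated) feature acts as a
    per-dimension mask that refines the fused descriptor."""
    def __init__(self, feat_dim):
        super().__init__()
        self.gate = nn.Linear(feat_dim, feat_dim)

    def forward(self, fused_feat):                          # fused_feat: (B, D)
        mask = torch.sigmoid(self.gate(fused_feat))         # mask values in (0, 1)
        return mask * fused_feat                            # re-weight feature dimensions

# Hypothetical usage: batch of 8 shapes, 12 rendered views each, 512-D view features.
views = torch.randn(8, 12, 512)
fused = ViewAttentionFusion(512)(views)
refined = FeatureMaskFusion(512)(fused)
print(refined.shape)  # torch.Size([8, 512])
```

The refined descriptor would then feed a standard classifier head for 3D shape recognition.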
