Abstract

In recent years, convolutional neural networks (CNNs) have achieved strong results in 2D image recognition, detection, and semantic segmentation. However, because 3D shape structures are complex and irregular, CNNs cannot be applied directly to 3D data. View-based methods exploit the maturity of deep learning in 2D image analysis to classify 3D shapes. Existing multi-view 3D shape classification methods, however, mostly adopt fixed viewpoints; the rendered images therefore contain considerable redundant information, which can interfere with the results. Herein, we propose a novel multi-view CNN framework that automatically discriminates the contribution of each viewpoint during network training and discards redundant information. In addition, we introduce an optimal viewpoint selection method based on viewpoint entropy into 3D shape classification. Compared with fixed-viewpoint methods, this procedure retains more detailed shape information and requires no orientation alignment of the model. Experiments on the ModelNet10 and ModelNet40 datasets verify the rationality and superiority of the viewpoint-entropy-based optimal viewpoint selection and of the proposed multi-view information fusion method. The experimental results show that the proposed method achieves better classification accuracy than existing 3D model classification methods.
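The abstract does not spell out the entropy formulation; the standard definition of viewpoint entropy scores a candidate view by the Shannon entropy of the relative projected areas of the visible faces, so views that reveal many faces evenly score higher than views dominated by a single face. A minimal sketch under that assumption (function names and the example areas are illustrative, not from the paper):

```python
import numpy as np

def viewpoint_entropy(projected_areas):
    """Shannon entropy over the relative visible-face areas of one view.

    `projected_areas` holds the projected area of each mesh face as seen
    from a candidate viewpoint; higher entropy suggests the view exposes
    more of the shape's surface detail.
    """
    areas = np.asarray(projected_areas, dtype=float)
    areas = areas[areas > 0]            # drop fully occluded faces
    p = areas / areas.sum()             # relative area distribution
    return float(-(p * np.log2(p)).sum())

def best_viewpoint(candidate_views):
    """Index of the candidate view with maximal viewpoint entropy."""
    return max(range(len(candidate_views)),
               key=lambda i: viewpoint_entropy(candidate_views[i]))

# A balanced view scores higher than one dominated by a single face.
views = [
    [4.0, 4.0, 4.0, 4.0],    # four faces seen with equal area
    [15.0, 0.5, 0.3, 0.2],   # one face dominates the projection
]
print(best_viewpoint(views))  # -> 0 (the balanced view)
```

In practice the per-face projected areas would come from rendering the mesh from each candidate camera position; selecting views by this score, rather than at fixed positions, is what lets the method skip orientation alignment.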
