Abstract

Analysis of point clouds with deep convolutional neural networks is an active area of research owing to its wide range of real-world applications, including autonomous driving, indoor navigation, robotics, virtual/augmented reality, and unmanned aerial vehicles (drones). However, capturing the fine-grained geometric and semantic properties required for the underlying recognition task from a raw, unstructured point cloud is highly challenging because of the sparsity of the points and the lack of explicit neighborhood relationships among them. In this paper, we introduce a deep, hierarchical, point-based 3D architecture for object classification and part segmentation that learns robust geometric features invariant to both the geometry and the orientation of local patches. The proposed architecture consists of multiple layers of sampling, concentric annular convolution, pooling, and residual feature propagation blocks. In the skip connections of our deep residual design, we combine a linear projection shortcut with a nonlinear ReLU group-normalization shortcut and batch normalization, improving both the optimization landscape and the representational power. Our network achieves results on par with or better than the state of the art on synthetic and real-world object classification (ModelNet40 and ScanObjectNN) and part segmentation (ShapeNet-part) benchmark datasets. The implementation and results are available at https://github.com/Rabbia-Hassan/Deep_Annular_Residual_Feature_Learning_for_3dPointClouds
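
To make the shortcut design described in the abstract concrete, the following is a minimal PyTorch sketch of a residual block whose skip connection combines a linear projection shortcut with a ReLU + group-normalization shortcut, alongside a batch-normalized main branch. The block name, layer sizes, and the exact ordering of normalization and activation are assumptions for illustration only and do not reflect the authors' released implementation; see the GitHub repository above for the actual code.

```python
import torch
import torch.nn as nn


class ResidualFeaturePropagationBlock(nn.Module):
    """Hypothetical residual block: the skip path is the sum of a linear
    projection shortcut and a nonlinear GroupNorm + ReLU shortcut, while
    the main branch uses pointwise convolutions with batch normalization.
    All design details here are assumptions, not the paper's exact layers."""

    def __init__(self, in_channels: int, out_channels: int, num_groups: int = 8):
        super().__init__()
        # Main branch: pointwise (1x1) convolutions over per-point features,
        # operating on tensors shaped (batch, channels, num_points).
        self.main = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=1),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(out_channels, out_channels, kernel_size=1),
            nn.BatchNorm1d(out_channels),
        )
        # Linear projection shortcut: matches the channel dimension without a nonlinearity.
        self.linear_shortcut = nn.Conv1d(in_channels, out_channels, kernel_size=1)
        # Nonlinear shortcut: projection followed by group normalization and ReLU.
        self.nonlinear_shortcut = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=1),
            nn.GroupNorm(num_groups, out_channels),
            nn.ReLU(inplace=True),
        )
        self.out_relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, num_points)
        shortcut = self.linear_shortcut(x) + self.nonlinear_shortcut(x)
        return self.out_relu(self.main(x) + shortcut)


if __name__ == "__main__":
    # Usage example on dummy per-point features: 4 clouds, 64-d features, 1024 points.
    block = ResidualFeaturePropagationBlock(in_channels=64, out_channels=128)
    points = torch.randn(4, 64, 1024)
    print(block(points).shape)  # torch.Size([4, 128, 1024])
```

The combined shortcut is a plausible reading of "linear projection shortcut and nonlinear ReLU group normalization shortcut": the linear path preserves easy gradient flow, while the normalized nonlinear path adds representational power to the skip connection.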
