Abstract

Point clouds are an important type of geometric data obtained from a variety of 3D sensors. Because they lack an explicit neighborhood structure, researchers often apply a voxelization step to impose a structured 3D neighborhood. This, however, has several disadvantages: it makes the data unnecessarily voluminous, requires additional computational effort, and can introduce quantization errors that hinder both the extraction of implicit 3D shape information and the capture of the data invariances essential for segmentation and recognition. In this context, this paper addresses the challenging problems of semantic segmentation and 3D object recognition on raw, unstructured 3D point cloud data. Specifically, a deep network architecture is proposed that consists of a cascaded combination of 3D point-based residual networks for simultaneous semantic scene segmentation and object classification. It exploits 3D point-based convolutions for representation learning directly from raw, unstructured point clouds. The proposed architecture has a simple design, is easy to implement, and outperforms existing state-of-the-art architectures, particularly for semantic scene segmentation, on three public datasets. The implementation and evaluation are publicly available at https://github.com/saira05/DPRNet.
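
As a rough illustration of the point-based residual processing the abstract describes, the following PyTorch sketch shows a generic residual block over per-point features, built from shared MLPs (1x1 convolutions applied independently to each point) with an identity shortcut. This is an assumed, minimal example for clarity; the class name `PointResidualBlock` is hypothetical, and the authors' actual DPRNet implementation is the one in the linked repository.

```python
# Minimal sketch (not the authors' DPRNet code) of a point-based
# residual block operating directly on unordered point features.
import torch
import torch.nn as nn

class PointResidualBlock(nn.Module):
    """Residual block over per-point features of shape (B, C, N)."""
    def __init__(self, channels: int):
        super().__init__()
        # Conv1d with kernel_size=1 acts as a shared MLP applied to
        # every point independently, so no voxel grid or explicit
        # neighborhood structure is required.
        self.mlp = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut: add the input back before the final ReLU.
        return self.relu(x + self.mlp(x))

# Usage: a batch of 2 point clouds, 64-dim features, 1024 points each.
feats = torch.randn(2, 64, 1024)
out = PointResidualBlock(64)(feats)
print(out.shape)  # torch.Size([2, 64, 1024])
```

Because every operation is per-point, the block is invariant to point ordering and avoids the memory and quantization costs of voxelization noted above.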
