Abstract

Background: Applying convolutional neural networks to large-scale 3D point cloud semantic segmentation is quite challenging, due to the unordered nature of 3D data and the computational burden of large-scale point clouds. Methods: To address these problems, we designed DPC-Net (Distributed Point Convolution Network). The input point clouds of DPC-Net are partitioned by a K-nearest neighbor strategy and reordered by Euclidean distance. To reduce computation and memory consumption while retaining critical features, a random sampling strategy is used and a distributed point convolution operation is designed. Our novel convolution method extracts local geometric information in parallel, capturing spatial distance and angle features. Furthermore, the proposed method can be easily and efficiently embedded into many networks for point cloud semantic segmentation. Results: Extensive experimental results on the Semantic3D and CSPC (Complex Scene Point Cloud) datasets indicate that the proposed DPC-Net not only achieves state-of-the-art performance but also reduces semantic segmentation time. Conclusions: In general, we present an efficient and lightweight deep convolutional network, DPC-Net, which captures local geometric features and local contextual information to predict point labels.
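The abstract's preprocessing step (K-nearest neighbor partitioning with neighbors reordered by Euclidean distance) can be sketched as follows. This is a minimal brute-force illustration, not the paper's implementation; the function name and the toy point cloud are placeholders, and DPC-Net's exact partitioning details are not specified in the abstract.

```python
import numpy as np

def knn_neighborhoods(points, k):
    """For each point, gather its k nearest neighbors sorted by
    Euclidean distance (nearest first). Brute-force O(N^2) sketch
    standing in for the KNN partitioning described in the abstract."""
    # Pairwise squared Euclidean distances between all points.
    diff = points[:, None, :] - points[None, :, :]   # (N, N, 3)
    dist2 = np.sum(diff * diff, axis=-1)             # (N, N)
    # Row-wise argsort: column 0 is the point itself (distance 0);
    # the next k columns are its neighbors ordered by distance.
    order = np.argsort(dist2, axis=1)[:, 1:k + 1]    # (N, k)
    return points[order]                             # (N, k, 3)

# Hypothetical toy cloud: four points on the x-axis.
cloud = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [4.0, 0, 0]])
neigh = knn_neighborhoods(cloud, k=2)
```

In a full pipeline, each (N, k, 3) neighborhood tensor would then be randomly subsampled and fed to the point convolution; large-scale implementations would replace the O(N²) distance matrix with a spatial index such as a KD-tree.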
