Abstract

Point cloud completion aims at predicting a complete 3D shape from an incomplete input. It has important applications in intelligent manufacturing, augmented reality, virtual reality, self-driving cars, and intelligent robotics. Although deep learning-based point cloud completion has developed rapidly in recent years, problems remain unsolved. Previous approaches predict each point independently, ignoring contextual information. Moreover, they usually predict a complete 3D shape from a global feature vector extracted from the incomplete input, which leads to the loss of fine-grained details. In this paper, motivated by the transposed convolution and the “UNet” structure in neural networks for image processing, we propose a context-aware deep network termed PCUNet for coarse-to-fine point cloud completion. It adopts an encoder-decoder structure, in which the encoder follows the design of the relation-shape convolutional neural network (RS-CNN), and the decoder consists of fully-connected layers and two stacked decoder modules for predicting complete point clouds. The contributions are twofold. First, we design the decoder module as a coordinate-guided context-aware upsampling module, in which contextual information is fully exploited through neighbor aggregation. Second, to preserve fine-grained details in the input, we propose attention-enhanced skip connections for effective information propagation from the encoder to the decoder. Experiments are conducted on the widely used PCN and KITTI datasets. The results show that our proposed approach achieves competitive performance compared to existing state-of-the-art approaches in terms of the Chamfer distance and computational complexity.
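The Chamfer distance used for evaluation measures, for each point in one cloud, the distance to its nearest neighbor in the other cloud, averaged in both directions. A minimal NumPy sketch of the symmetric squared-distance variant (the exact variant reported in the paper may differ, e.g. L1 vs. squared L2):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3),
    using squared Euclidean nearest-neighbor distances."""
    # Pairwise squared distances: (N, M) matrix via broadcasting.
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in each direction, summed.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For identical clouds the distance is zero; it grows as the predicted cloud deviates from the ground truth, which is why lower values indicate better completion quality.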
