Abstract

Existing sparse-to-dense methods for point cloud completion generally focus on designing refinement and expansion modules that expand the point cloud from sparse to dense. However, they neglect to maintain a well-behaved generation process for the points at the sparse level, which causes the loss of shape priors that should guide the dense point cloud. To address this challenge, we introduce Transformers into both the feature extraction and point generation processes, and propose a Context-based Point Generation Network (CPGNet) with Point Context Extraction (PCE) and Context-based Point Transformation (CPT) to control the point generation process at the sparse level. Our CPGNet can infer the missing point clouds at the sparse level via PCE and CPT blocks, which provide well-arranged center points for generating the dense point clouds. The PCE block extracts both local and global context features of the observed points; multiple PCE blocks in the encoder hierarchically offer geometric constraints and priors for point completion. The CPT block fully exploits geometric contexts present in the observed point clouds and transforms them into context features of the missing points; multiple CPT blocks in the decoder progressively refine these context features and finally generate the center points for the missing shapes. Quantitative and visual comparisons on the PCN and ShapeNet-55 datasets demonstrate that our model outperforms state-of-the-art methods.
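To make the encoder/decoder roles concrete, the following is a minimal toy sketch of the pipeline the abstract describes: a PCE-style block that mixes local (nearest-neighbor) and global (pooled) context over the observed points, and a CPT-style block that transforms observed context into features for missing center points via a simple attention weighting. All function names, dimensions, and the attention formulation here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def pce_block(points, feats, k=4):
    # Local context: average features over k nearest neighbors
    # (a toy stand-in for the paper's Transformer-based local attention).
    dists = np.linalg.norm(points[:, None] - points[None], axis=-1)
    knn = np.argsort(dists, axis=1)[:, :k]
    local = feats[knn].mean(axis=1)
    # Global context: mean-pooled feature broadcast back to every point.
    global_ctx = feats.mean(axis=0, keepdims=True)
    return local + global_ctx

def cpt_block(missing_feats, observed_feats):
    # Toy cross-attention: weight observed context features by similarity
    # to each missing-point query, then add the aggregated context.
    scores = missing_feats @ observed_feats.T
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = scores / scores.sum(axis=1, keepdims=True)
    return missing_feats + attn @ observed_feats

# Observed partial cloud: 128 points with 32-dim features (hypothetical sizes).
obs_pts = rng.normal(size=(128, 3))
obs_feats = rng.normal(size=(128, 32))
obs_feats = pce_block(obs_pts, obs_feats)        # encoder: PCE block

# 64 learnable queries stand in for the missing points at the sparse level.
miss_feats = rng.normal(size=(64, 32))
miss_feats = cpt_block(miss_feats, obs_feats)    # decoder: CPT block

# Project refined context features to 3-D center points for the missing shape.
centers = miss_feats @ rng.normal(size=(32, 3))
print(centers.shape)  # (64, 3)
```

In the actual network these blocks are stacked (multiple PCE blocks in the encoder, multiple CPT blocks in the decoder), and the resulting sparse center points seed a subsequent sparse-to-dense expansion stage.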
