Abstract

Point cloud completion aims to predict dense, complete 3D shapes from the sparse, incomplete point clouds captured by 3D sensors or scanners. It plays an essential role in applications such as autonomous driving, 3D reconstruction, augmented reality, and robot navigation. Existing point cloud completion methods follow an encoder-decoder paradigm in which the complete point cloud is recovered with a coarse-to-fine strategy. However, relying only on a single global feature makes recovery difficult and leads to blurring of the global structure and distortion of local details. To address this problem, we propose a novel Partial-to-Partial Point Generation Network ($\text{P}^{2}$GNet), a learning-based approach for point cloud completion. In $\text{P}^{2}$GNet, a feature-disentangling encoder extracts the global feature together with a view-related missing code, and novel-view partial point clouds are generated conditioned on this missing code. To better aggregate the partial point clouds, an attentive sampling module is proposed to fuse the multiple partial point clouds into the final complete result. Extensive experiments on several public benchmarks demonstrate that our $\text{P}^{2}$GNet outperforms state-of-the-art point cloud completion methods.
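Purely as an illustration of the aggregation idea mentioned above, and not the paper's actual attentive sampling module, the sketch below fuses several partial point clouds by scoring each point and sampling points in proportion to their scores. The scoring heuristic, function name, and shapes are assumptions, since the abstract gives no implementation details.

```python
import numpy as np

def attentive_sample(partial_clouds, n_out, rng=None):
    """Fuse several partial point clouds into one cloud of n_out points by
    scoring each point and sampling proportionally to its score.

    partial_clouds : list of (Ni, 3) arrays, candidate partial point clouds.
    n_out          : number of points in the fused output cloud.
    """
    rng = np.random.default_rng() if rng is None else rng
    points = np.concatenate(partial_clouds, axis=0)  # (N, 3)

    # Toy attention score: closeness of each point to the merged centroid.
    # A learned module would predict these scores from point features instead.
    centroid = points.mean(axis=0)
    logits = -np.linalg.norm(points - centroid, axis=1)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Sample n_out points with probability proportional to their weight.
    idx = rng.choice(len(points), size=n_out,
                     replace=n_out > len(points), p=weights)
    return points[idx]

# Usage: fuse two synthetic partial clouds into a 1024-point result.
clouds = [np.random.randn(600, 3), np.random.randn(500, 3) + 0.5]
fused = attentive_sample(clouds, n_out=1024)
print(fused.shape)  # (1024, 3)
```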
