Abstract

Point clouds are one of the most popular 3D representations in computer vision and computer graphics. However, due to their sparseness and non-uniformity, raw point clouds from scanning devices cannot be applied directly to downstream geometry analysis tasks. In this paper, we propose an end-to-end point cloud up-sampling network to reconstruct dense yet uniformly distributed point clouds. First, we exploit the spatial relationships of local regions and capture point-wise features progressively. We then propose a novel network to aggregate those features from different levels. Finally, we design an up-sampling module consisting of multi-branch convolution units to generate the dense point clouds. We conduct extensive experiments on currently available public benchmarks. Experimental results show that the proposed method achieves a Hausdorff distance of 0.103 and a Chamfer distance of 0.010 on the VisionAir dataset, in comparison with the baseline with respect to uniformity, proximity to surface, and mesh reconstruction.
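The up-sampling stage described above, in which multi-branch convolution units expand N point-wise features into r·N features, can be illustrated with a minimal NumPy sketch. Everything here (the function name, feature shapes, random per-branch weights, and the ReLU nonlinearity) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

def multi_branch_upsample(features, r, rng=None):
    """Toy sketch of a multi-branch up-sampling unit (assumed design):
    each of the r branches applies its own 1x1 convolution, here modeled
    as a per-branch weight matrix followed by ReLU, and the branch
    outputs are concatenated along the point axis so that N input
    points become r*N output feature vectors."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = features.shape
    branches = []
    for _ in range(r):
        # Per-branch weights are random here purely for illustration;
        # in a real network they would be learned.
        w = rng.standard_normal((c, c)) * 0.1
        branches.append(np.maximum(features @ w, 0.0))  # 1x1 conv + ReLU
    return np.concatenate(branches, axis=0)  # shape (r*n, c)

feats = np.random.default_rng(1).standard_normal((128, 64))  # 128 points, 64-dim features
dense = multi_branch_upsample(feats, r=4)  # 512 feature vectors
```

A final shared layer would typically project these r·N features back to 3D coordinates; that step is omitted here to keep the sketch focused on the expansion itself.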
