Abstract

In the field of 3D vision, 3D point cloud completion is a crucial task in many practical applications. Current methods use the Transformer encoder-decoder framework to predict the missing part of the point cloud features at a single low resolution, which does not fully exploit feature information across multiple resolutions and can result in the loss of the object's geometric details. In this paper, we present a novel point cloud completion method, CarvingNet, which, to the best of our knowledge, is the first to apply the U-Net architecture to the point cloud completion task by operating directly on unordered point cloud features at multiple resolutions. First, we gradually expand the receptive field and use cross-attention to purify the features of the missing part of the point cloud at each resolution, generating the contour features of the complete point cloud at the final resolution. Then, we gradually reduce the receptive field and use cross-attention to refine the features of the complete point cloud at each resolution, generating detail-rich features of the complete point cloud at the final resolution. To obtain point cloud features at different resolutions, we specifically design up-sampling and down-sampling modules for unordered point cloud features. Furthermore, we improve the FoldingNet network to make it more suitable for generating high-quality dense point clouds. Experimental results demonstrate that our proposed CarvingNet achieves performance on par with existing state-of-the-art methods on the ShapeNet-55, ShapeNet-34, and KITTI benchmarks.
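The cross-attention refinement described above can be illustrated with a minimal NumPy sketch: query features (e.g., for the missing part) attend over the unordered features of the partial cloud via scaled dot-product attention. All names, shapes, and the single-head formulation here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention over unordered
    point features (an illustrative sketch, not CarvingNet's module).

    queries : (M, d) features being refined (e.g., missing-part queries)
    keys    : (N, d) features attended to (e.g., partial-cloud features)
    values  : (N, d) features aggregated per query
    Returns an (M, d) array of refined features.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)           # (M, N) similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over N points
    return weights @ values                          # (M, d) weighted sum

# Hypothetical feature sets: 128 partial-cloud points, 64 missing-part queries
rng = np.random.default_rng(0)
partial = rng.normal(size=(128, 32))
queries = rng.normal(size=(64, 32))
refined = cross_attention(queries, partial, partial)
```

Because attention is a permutation-invariant aggregation over the key set, the same sketch applies at every resolution of the U-Net-style hierarchy regardless of point ordering.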
