Abstract

For many vision tasks and intelligent robotics applications, the scanned 3D point cloud is often incomplete, so inferring the intact shape from a residual, defective scan becomes an essential task. Previous 3D completion neural networks generally use voxel-based or point-based methods to learn and process 3D data. For voxel-based models, computational cost and memory grow rapidly as input resolution increases, and limited computational resources prevent fine-grained features from being preserved in the completed point cloud. Point-based models suffer from imprecise feature acquisition and crude reconstruction of complicated structures, making it extremely hard to recover elaborate semantic shapes. Combining the advantages of voxel-based and point-based feature extraction through a high-frequency feature fusion module, this paper proposes a dual-scale point cloud completion network called DSNet, which performs global feature analysis at the voxel scale and local feature analysis at the point scale. The fused features are then integrated into the decoding and generation process, completing the point cloud from coarse to fine. Experimental results, from both quantitative and qualitative perspectives, on several widely used datasets demonstrate that our approach surpasses state-of-the-art point cloud completion networks and generalizes well. Code is available at https://github.com/engqing/DSNet.
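The dual-scale idea can be illustrated with a minimal NumPy sketch: a coarse voxel occupancy grid supplies a shared global descriptor, while per-point neighbor offsets supply local detail, and the two are concatenated per point. This is not the authors' DSNet implementation; the grid resolution, neighbor count, and normalization assumption are arbitrary illustrative choices.

```python
import numpy as np

def voxelize(points, res=8):
    """Global scale: occupancy grid at a fixed, coarse resolution.
    Points are assumed normalized to [0, 1]^3 (illustrative assumption)."""
    idx = np.clip((points * res).astype(int), 0, res - 1)
    grid = np.zeros((res, res, res), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid.ravel()  # flattened global descriptor of length res**3

def local_features(points, k=4):
    """Local scale: mean offset to each point's k nearest neighbors
    (a crude stand-in for learned point-wise features)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]  # skip self at index 0
    return (points[nn] - points[:, None, :]).mean(axis=1)

def dual_scale_features(points, res=8, k=4):
    """Fuse scales: tile the shared global descriptor across points
    and concatenate it with each point's local feature."""
    g = voxelize(points, res)
    loc = local_features(points, k)
    return np.concatenate([np.tile(g, (len(points), 1)), loc], axis=1)

rng = np.random.default_rng(0)
pts = rng.random((128, 3))
feats = dual_scale_features(pts)
print(feats.shape)  # (128, 8**3 + 3) = (128, 515)
```

In DSNet, both branches are learned and the fused features drive a coarse-to-fine decoder; here the concatenation merely shows how one descriptor can carry global structure while another carries per-point detail.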
