Abstract

Point cloud data in the real world is often affected by occlusion and light reflection, leading to incomplete data. Point clouds with large missing regions cause significant deviations in downstream tasks. A dual feature fusion network (DFF-Net) is proposed to improve completion accuracy for large missing regions of a point cloud. First, a dual feature encoder is designed to extract and fuse the global and local features of the input point cloud. A decoder then directly generates a point cloud for the missing region that retains local details. To make the generated point cloud more detailed, a loss function with multiple terms is employed to emphasise the distribution density and visual quality of the generated points. Extensive experiments show that the authors’ DFF-Net outperforms previous state-of-the-art methods in point cloud completion.
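To make the described pipeline concrete, the sketch below illustrates one possible dual-feature encoder and decoder of the kind outlined above: a per-point (local) branch, a max-pooled (global) branch, concatenation-based fusion, and a decoder that regresses the missing-region coordinates. It is a minimal illustration under assumed layer sizes in PyTorch, not the authors’ DFF-Net implementation.

```python
# Minimal sketch (not the authors' code) of a dual-feature encoder/decoder for
# point cloud completion: a per-point local branch, a max-pooled global branch,
# concatenation-based fusion, and a decoder that regresses the missing region.
# All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class DualFeatureFusionSketch(nn.Module):
    def __init__(self, num_missing_points=512):
        super().__init__()
        self.num_missing_points = num_missing_points
        # Shared per-point MLP (local branch), implemented as 1-D convolutions.
        self.local_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        # Further lifting before max-pooling into a global descriptor.
        self.global_mlp = nn.Sequential(
            nn.Conv1d(128, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 1024, 1),
        )
        # Decoder maps the fused feature to the missing-region coordinates.
        self.decoder = nn.Sequential(
            nn.Linear(1024 + 128, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_missing_points * 3),
        )

    def forward(self, partial):                      # partial: (B, N, 3)
        x = partial.transpose(1, 2)                  # (B, 3, N)
        local_feat = self.local_mlp(x)               # (B, 128, N) per-point features
        global_feat = self.global_mlp(local_feat).max(dim=2).values  # (B, 1024)
        # Fuse the global descriptor with a pooled summary of the local features.
        fused = torch.cat([global_feat, local_feat.max(dim=2).values], dim=1)
        out = self.decoder(fused)                    # (B, num_missing_points * 3)
        return out.view(-1, self.num_missing_points, 3)


if __name__ == "__main__":
    net = DualFeatureFusionSketch()
    missing = net(torch.rand(2, 2048, 3))            # two partial clouds of 2048 points
    print(missing.shape)                             # torch.Size([2, 512, 3])
```

In practice such a network would be trained with a multi-term loss (e.g. Chamfer distance plus density or adversarial terms) against the ground-truth missing region, matching the abstract’s emphasis on distribution density and visual quality.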
