Abstract

Most current point cloud super-resolution methods require heavy computation and achieve low accuracy in large outdoor scenes. To address this, a Dense Feature Pyramid Network (DenseFPNet) is proposed for the feature-level fusion of images with low-resolution point clouds to generate higher-resolution point clouds. The approach recasts 3D point cloud super-resolution as a 2D depth map completion problem, reducing the time and complexity of obtaining high-resolution point clouds from LiDAR alone. The network first uses an image-guided feature extraction network based on RGBD-DenseNet as an encoder to extract multi-scale features, followed by upsampling blocks as a decoder that gradually recover the size and details of the feature maps; the corresponding layers of the encoder and decoder are linked through pyramid connections. Experiments on the KITTI depth completion dataset show that the network performs well across metrics, improving RMSE by 17.71%, 16.60%, 7.11%, and 4.68% over CSPD, Spade-RGBsD, Sparse-to-Dense, and GAENET, respectively.
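The abstract does not give DenseFPNet's internals, but the overall shape it describes, an encoder that extracts multi-scale features and a decoder that restores resolution while fusing matching encoder levels through pyramid connections, can be sketched in miniature. The following NumPy toy uses average pooling and nearest-neighbor upsampling as stand-ins for the learned RGBD-DenseNet and upsampling blocks; all names and operations here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def avg_pool2(x):
    """Downsample a 2D feature map by 2x average pooling (encoder step)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Upsample a 2D feature map by 2x nearest-neighbor (decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def pyramid_pass(depth, levels=3):
    """Encoder builds a multi-scale pyramid; decoder recovers the original
    size, fusing each encoder level via a pyramid (skip) connection."""
    feats = [depth]
    for _ in range(levels):
        feats.append(avg_pool2(feats[-1]))   # encoder: coarser and coarser maps
    x = feats[-1]                            # bottleneck (coarsest scale)
    for skip in reversed(feats[:-1]):
        x = upsample2(x) + skip              # pyramid connection: fuse encoder features
    return x                                 # same spatial size as the input map
```

In the real network the pooling and upsampling stand-ins would be learned convolutional blocks, and the fused maps would feed a regression head that fills in the missing depth values of the sparse LiDAR projection.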
