Abstract

Dense 3D reconstruction is required for robots to navigate safely and perform advanced tasks. Accurate per-image depth and camera pose are the basis of 3D reconstruction. However, the resolution of depth maps obtained by LiDAR and RGB-D cameras is limited, and traditional pose-estimation methods are not accurate enough. In addition, using every image for dense 3D reconstruction inflates the point cloud and the computational cost. To address these issues, we propose a 3D reconstruction system. Specifically, we propose a depth network with contour and gradient attention that completes and corrects depth maps to produce high-resolution, high-quality depth maps. We then propose a pose-estimation method that fuses traditional algorithms with deep learning to obtain accurate localization results. Finally, we adopt autonomous keyframe selection to reduce the number of keyframes, and perform surfel-based geometric reconstruction to build a dense 3D model of the environment. On the TUM RGB-D, ICL-NUIM, and KITTI datasets, our method significantly improves the quality of the depth maps, the localization accuracy, and the quality of the 3D reconstruction, while also accelerating the reconstruction itself.
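
To make the contour-and-gradient-attention idea concrete, below is a minimal sketch (not the authors' implementation, whose architecture the abstract does not specify) of how depth gradients could gate feature maps in a depth-completion network. It assumes a PyTorch pipeline; the module and parameter names (`GradientAttention`, `gate`) are hypothetical, and fixed Sobel kernels stand in for whatever contour operator the paper actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientAttention(nn.Module):
    """Hypothetical sketch: weight features by local depth-gradient
    magnitude so the network focuses on contours and depth
    discontinuities when completing a sparse depth map."""

    def __init__(self, channels: int):
        super().__init__()
        # Fixed Sobel kernels approximate horizontal/vertical gradients.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", sobel_x.view(1, 1, 3, 3))
        self.register_buffer("ky", sobel_x.t().view(1, 1, 3, 3))
        # 1x1 conv turns the single-channel gradient map into per-channel gates.
        self.gate = nn.Conv2d(1, channels, kernel_size=1)

    def forward(self, features: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        gx = F.conv2d(depth, self.kx, padding=1)
        gy = F.conv2d(depth, self.ky, padding=1)
        grad_mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
        attn = torch.sigmoid(self.gate(grad_mag))
        # Residual gating: features pass through, amplified near contours.
        return features * (1.0 + attn)


if __name__ == "__main__":
    feats = torch.randn(1, 32, 120, 160)        # backbone feature maps
    sparse_depth = torch.rand(1, 1, 120, 160)   # incomplete depth map
    out = GradientAttention(32)(feats, sparse_depth)
    print(out.shape)  # torch.Size([1, 32, 120, 160])
```

The residual gating (`1 + attn`) is one plausible design choice: it boosts responses near depth edges without suppressing features in smooth regions, which is where most missing depth values can be filled by simple interpolation.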
