Abstract

3D reconstruction is an important topic in emerging applications such as smart robotics, virtual reality (VR), augmented reality (AR), and autonomous driving. The RGB-D simultaneous localization and mapping (SLAM) technique is widely used in the reconstruction process. However, low light and low-texture environments often yield too few point features and cause the reconstruction to fail. To address this problem, we propose a robust RGB-D SLAM system that exploits high dynamic range (HDR) image information, called HDR-based SLAM. A deep learning based HDR generation method is adopted to map a single low dynamic range (LDR) image into a radiance map, which is normalized to exclude the influence of exposure time. In the feature matching step, we retrained the ORB descriptor patch to fit the normalized radiance maps. The proposed method improves both the quantitative camera trajectory accuracy and the qualitative geometry reconstruction results. Experimental results show that the proposed method outperforms standard range-imaging SLAM in challenging low light environments, which helps expand the applicability of 3D reconstruction systems.
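As a concrete illustration of the exposure normalization step mentioned above, the following Python sketch divides the network's radiance output by the exposure time and then standardizes it in the log domain. This is a minimal sketch under our own assumptions; the function name, the log-domain standardization, and the epsilon constant are ours and may differ from the paper's exact formulation.

    import numpy as np

    def normalize_radiance(radiance, exposure_time, eps=1e-6):
        # Hypothetical normalization of an HDR radiance map so that
        # feature matching becomes invariant to the exposure time of
        # the source LDR frame (the paper's exact recipe may differ).
        # radiance: HxW or HxWx3 float array from the HDR network.
        # exposure_time: shutter time of the LDR frame, in seconds.
        irradiance = radiance / max(exposure_time, eps)
        # In the log domain a multiplicative gain becomes an additive
        # offset, which per-image standardization then removes.
        log_irr = np.log(irradiance + eps)
        return (log_irr - log_irr.mean()) / (log_irr.std() + eps)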

Highlights

  • On account of its wide range of applications, 3D scene reconstruction has become one of the most important and active research topics in the field of computer vision over the past few years

  • Unlike previous works, which are based on dense simultaneous localization and mapping (SLAM) systems, we propose a feature-based high dynamic range (HDR) SLAM and incorporate it into the 3D reconstruction pipeline to improve reconstruction results in low light environments

  • This paper presents a robust normalized HDR-based 3D reconstruction pipeline to reconstruct challenging low light scenes scanned with a consumer RGB-D camera

Summary

Introduction

On account of its wide range of applications, 3D scene reconstruction has become one of the most important and active research topics in the field of computer vision over the past few years. Many methods have been proposed for robust camera tracking and efficient volumetric integration in 3D reconstruction. Visual simultaneous localization and mapping (SLAM) can estimate camera motion and reconstruct a 3D scene simultaneously. In 3D reconstruction, camera tracking (camera pose estimation) is one of the most important steps in the whole pipeline. Two families of methods dominate. Direct methods use all of the geometric or photometric information in a frame and find the relative camera pose by minimizing a photometric error, whereas feature-based methods extract a sparse set of points from each frame, match them temporally by their feature descriptors, and estimate the pose from the matches, as in the sketch below. KinectFusion is the classic direct method: each depth frame is aligned to a global volumetric model, and the iterative closest point (ICP) algorithm is used to estimate the camera pose [20]. However, KinectFusion has limitations in terms of drift error, high computational cost, and small mapping space.
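To make the feature-based alternative concrete, the following Python/OpenCV sketch tracks the camera between two RGB-D frames by matching ORB descriptors and solving a RANSAC-PnP problem with the previous frame's depth. It illustrates generic feature-based RGB-D tracking, not the paper's specific pipeline; all names and parameter choices here are ours.

    import cv2
    import numpy as np

    def estimate_relative_pose(img_prev, img_curr, depth_prev, K):
        # Generic feature-based RGB-D tracking step (illustrative only).
        # K is the 3x3 camera intrinsic matrix; depth_prev holds metric
        # depth for the previous frame, with 0 where depth is invalid.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img_prev, None)
        kp2, des2 = orb.detectAndCompute(img_curr, None)

        # Hamming distance is the natural metric for binary ORB descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        pts3d, pts2d = [], []
        for m in matches:
            u, v = kp1[m.queryIdx].pt
            z = depth_prev[int(v), int(u)]
            if z <= 0:          # skip keypoints without valid depth
                continue
            # Back-project the previous-frame keypoint to a 3D point.
            pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
            pts2d.append(kp2[m.trainIdx].pt)

        # RANSAC-PnP rejects outlier matches while solving for the pose
        # of the current frame relative to the previous one.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.float32(pts3d), np.float32(pts2d), K, None)
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec

In contrast to KinectFusion's depth-only ICP alignment, a sparse scheme like this depends entirely on finding enough distinctive keypoints, which is exactly what degrades in low light and motivates the HDR-based pipeline above.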

