Abstract

RGB-D sensors (sensors that combine an RGB camera with a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including a limited measurement range (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach that geometrically integrates the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, a precise calibration procedure for RGB-D sensors is introduced. In addition to the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is calibrated. Second, to ensure the accuracy of the RGB image poses, a refined false feature match rejection method is introduced that combines the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, reducing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity encountered during pose estimation from RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated on publicly available benchmark datasets collected with a Kinect, and the method is then examined on two datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
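To make the registration step concrete: because structure-from-motion poses from RGB images are only defined up to an unknown scale, registering the RGB scene to the metric depth scene amounts to estimating a similarity transform (scale, rotation, translation). Below is a minimal sketch of that recovery using the closed-form SVD solution of Umeyama (1991), given 3D point correspondences between the two scenes; this is an illustrative stand-in, not the authors' exact method, and the function and variable names are assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): estimate the similarity
# transform (scale s, rotation R, translation t) registering an RGB-derived
# structure-from-motion point cloud to the metric depth point cloud, via the
# closed-form SVD solution of Umeyama (1991). Correspondences are assumed given.
import numpy as np

def umeyama_similarity(src, dst):
    """Estimate s, R, t such that dst ~ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)

    # Reflection handling: force det(R) = +1.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src   # resolves the SfM scale ambiguity
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Usage: bring the RGB (SfM) scene into the depth scene's metric frame.
# s, R, t = umeyama_similarity(sfm_points, depth_points)
# aligned = s * (R @ sfm_points.T).T + t
```

In practice such a closed-form fit is typically wrapped in a robust estimator (e.g., RANSAC) so that outlier correspondences do not corrupt the recovered transform.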

Highlights

  • Detailed 3D modeling of indoor and outdoor environments is an important technology for many tasks such as indoor mapping, indoor positioning and navigation, and semantic mapping [1]. Traditionally, there are two main approaches to close-range 3D modeling: terrestrial laser scanning (TLS) and close-range photogrammetry

  • Although RGB images are captured with off-the-shelf digital cameras and their rich visual information can be used for loop closure detection [2,3], it is hard to obtain enough points for dense modeling through regular photogrammetric techniques, especially in dark environments and poorly textured areas [1,4,5,6]

  • A global optimization model is used to improve the accuracy of the camera poses, reducing the inconsistencies between the depth frames; we elaborate the refined relative motion estimation method for RGB image sequences, and the robust geometric registration methodology for the depth scene and the RGB scene is presented (see the sketch after this list)
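One ingredient of the refined relative motion estimation highlighted above is rejecting false feature matches by checking them against the depth data and the initial camera poses. The sketch below illustrates the general idea under stated assumptions: a matched keypoint is back-projected with its depth, moved into the other frame with the initial relative pose, and re-projected; the function name, threshold, and variable names are illustrative, not the paper's formulation.

```python
# Illustrative sketch (assumptions, not the paper's exact method): reject
# feature matches that are inconsistent with the depth map and the initial
# relative pose between two RGB-D frames.
import numpy as np

def reject_false_matches(pts_a, pts_b, depths_a, K, R_ab, t_ab, thresh_px=3.0):
    """Keep only matches consistent with depth and the initial relative pose.

    pts_a, pts_b : (N, 2) pixel coordinates of matched keypoints.
    depths_a     : (N,) depth of each keypoint in frame A (metres).
    K            : (3, 3) camera intrinsic matrix.
    R_ab, t_ab   : initial rotation/translation from frame A to frame B.
    """
    # Back-project frame-A pixels to 3D camera coordinates using their depth.
    ones = np.ones((len(pts_a), 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_a, ones]).T).T
    X_a = rays * depths_a[:, None]

    # Transform into frame B and re-project with the intrinsics.
    X_b = (R_ab @ X_a.T).T + t_ab
    proj = (K @ X_b.T).T
    proj = proj[:, :2] / proj[:, 2:3]

    # Keep a match only when its reprojection error is below the threshold.
    err = np.linalg.norm(proj - pts_b, axis=1)
    keep = err < thresh_px
    return pts_a[keep], pts_b[keep], keep
```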


Summary

Introduction

Detailed 3D modeling of indoor and outdoor environments is an important technology for many tasks such as indoor mapping, indoor positioning and navigation, and semantic mapping [1]. Traditionally, there are two main approaches to close-range 3D modeling: terrestrial laser scanning (TLS) and close-range photogrammetry. With TLS technology, the obtained 3D point clouds contain detailed structural information and are well suited for frame-to-frame alignment. Although RGB images are captured with off-the-shelf digital cameras and their rich visual information can be used for loop closure detection [2,3], it is hard to obtain enough points for dense modeling through regular photogrammetric techniques, especially in dark environments and poorly textured areas [1,4,5,6]. RGB-D sensors also have some significant drawbacks with respect to dense 3D mapping. These sensors only allow measurements within a limited distance and a limited field of view, which may cause tracking loss due to a lack of the spatial structure needed to constrain ICP (iterative closest point) alignments [1]. As the random error of the depth measurement increases with distance from the sensor, only data acquired within the range of 0 to 3 m from the sensor can be used for mapping applications [11]
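The range restriction above is commonly handled in practice by discarding far-range depth samples and down-weighting the rest, since for structured-light RGB-D sensors such as the Kinect the random depth error is typically modeled as growing quadratically with range. The sketch below illustrates this under stated assumptions; the noise coefficient is an illustrative value, not a calibrated one from the paper.

```python
# Minimal sketch of the range handling the text implies: drop depth samples
# beyond ~3 m and weight the rest by inverse variance, assuming a quadratic
# noise model sigma_z = k * z**2. K_NOISE is an assumed illustrative value.
import numpy as np

MAX_RANGE_M = 3.0   # usable range cited in the text
K_NOISE = 0.0035    # assumed quadratic noise coefficient (1/m)

def filter_and_weight(depths):
    """Return a validity mask and inverse-variance weights for depth samples."""
    depths = np.asarray(depths, dtype=float)
    valid = (depths > 0.0) & (depths <= MAX_RANGE_M)
    sigma = K_NOISE * np.maximum(depths, 1e-6) ** 2  # error grows with range
    weights = np.where(valid, 1.0 / sigma ** 2, 0.0)
    return valid, weights
```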
