Abstract

RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to dense 3D mapping of indoor environments. First, they only allow a limited measurement range (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance from the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences can be resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments, and the experimental results demonstrate the feasibility and robustness of the proposed method.
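The scale-ambiguity resolution described in the abstract amounts to rescaling the up-to-scale image-based reconstruction using the metric depths provided by the depth sensor. The sketch below is only an illustration of that idea, not the paper's exact formulation; the median-ratio estimator and the sample values are assumptions:

```python
import numpy as np

def recover_scale(sfm_depths, sensor_depths):
    """Estimate the global scale of an up-to-scale reconstruction.

    sfm_depths:    depths of points in the image-based (scale-ambiguous) model.
    sensor_depths: metric depths of the same points from the depth sensor.
    The median of the per-point depth ratios serves as a simple robust estimator.
    """
    ratios = np.asarray(sensor_depths) / np.asarray(sfm_depths)
    return float(np.median(ratios))

# Illustrative values: the reconstruction is half the metric size.
scale = recover_scale([0.5, 1.0, 1.5, 2.0], [1.0, 2.0, 3.0, 4.1])
```

Multiplying every point of the image-based model by this scale brings it into metric units before the rigid registration step.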

Highlights

  • Detailed 3D modeling of indoor environments is an important technology for many applications, such as indoor mapping, indoor positioning and navigation, and semantic mapping (Henry et al., 2014)

  • To register the 3D models from color image sequences to the models from depth information, a robust registration method is proposed by establishing the geometric relationship between them

  • It should be noted that some color deviation may exist in the RGB images collected by the RGB-D sensor due to inaccurate color perception under the changing lighting conditions of indoor environments


Summary

INTRODUCTION

Detailed 3D modeling of indoor environments is an important technology for many applications, such as indoor mapping, indoor positioning and navigation, and semantic mapping (Henry et al., 2014). Although color images captured with off-the-shelf digital cameras provide rich visual information that can be used for loop closure detection (Konolige and Agrawal, 2008; Nistér, 2004), it is hard to obtain enough points for dense modeling through regular photogrammetric techniques, especially in dark environments or poorly textured areas (Henry et al., 2010; Kerl et al., 2013; Triggs et al., 2000). We introduce an enhanced RGB-D mapping approach for detailed 3D modeling of large-range indoor environments by combining RGB image sequences with depth information. A robust automatic registration method is proposed to register the 3D scene produced from the RGB image sequences and the model from the depth sensor.
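The rigid-transformation recovery step aligns the image-based and depth-based models through corresponding 3D points. A standard least-squares solution for this subproblem is the SVD-based Kabsch method; the sketch below is an illustrative implementation under that assumption, not necessarily the paper's exact algorithm:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) so that R @ src_i + t ≈ dst_i.

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice such a solver is wrapped in a robust scheme (e.g., RANSAC over candidate correspondences) so that mismatched key points do not corrupt the registration.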

LITERATURE REVIEW
Overview of the Enhanced RGB-D Mapping System
Camera Calibration
Relative Motion Estimation
Key-Point Detection and Matching
Camera Pose Estimation
Robust Registration of Depth-based and Image-based Models
Camera Model for Depth Images
Scale Recovery
Rigid Transformation Recovery
Absolute Camera Trajectory Recovery
Datasets
Experimental Results and Analysis
SUMMARY AND CONCLUSIONS