Abstract

In this paper, we propose a new method for 3D map reconstruction with the Kinect sensor based on multiple ICP. The Kinect sensor provides RGB images as well as depth images. Since the depth and color images are captured by a single Kinect sensor from multiple views, each depth image must be related to its corresponding color image. After image registration, point-to-point correspondences between two depth images are found, and the images can be combined and represented in 3D space. To obtain a dense 3D map of an indoor environment, we design an algorithm that combines information from multiple views of the Kinect sensor. First, features extracted from the color and depth images are used to localize them in the 3D scene. Next, the Iterative Closest Point (ICP) algorithm is used to align all frames, so that each new frame is added to the dense 3D model. However, the spatial distribution and resolution of the depth data affect the performance of an ICP-based 3D scene reconstruction system. In this paper, we automatically divide the depth data into sub-clouds of similar resolution, align them separately, and merge them into the complete point cloud. We call this method multiple ICP. Computer simulation results on real data show an improvement in the accuracy of 3D map reconstruction.
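The frame-alignment step described above can be sketched as a minimal point-to-point ICP in NumPy. This is an illustrative sketch, not the paper's implementation: the function names, the brute-force nearest-neighbour search, and the iteration count are all assumptions for the example.

```python
import numpy as np

def best_fit_transform(A, B):
    # Least-squares rigid transform (R, t) mapping rows of A onto rows of B
    # via the SVD (Kabsch) solution.
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=30):
    # Point-to-point ICP: alternate nearest-neighbour matching with a rigid
    # least-squares fit until src is aligned to dst.
    P = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for a small sketch; a k-d tree
        # would be used for real depth data).
        d = np.linalg.norm(P[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(P, matches)
        P = P @ R.T + t
    return P
```

In the multiple-ICP scheme of the paper, this alignment would be run separately on each resolution-homogeneous sub-cloud before the results are merged into the complete point cloud.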
