Abstract

In this paper, we propose a new method for 3D object reconstruction using an RGB-D sensor, which provides both RGB color images and depth images. Because the color and depth sensors of an RGB-D camera are located at different positions, the depth image must be registered to the color image. After this registration, a point-to-point correspondence between the two images is established, and they can be combined and represented in 3D space. To obtain a dense 3D map of the object, we design an algorithm that merges information from all of the cameras used. First, features extracted from the color and depth images are used to localize the frames in the 3D scene. Next, the Iterative Closest Point (ICP) algorithm is used to align all frames, and each aligned frame is added to the dense 3D model. However, the spatial distribution and resolution of the depth data affect the performance of an ICP-based 3D scene reconstruction system. The presented computer simulation results show improved accuracy of 3D object reconstruction on real data.
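The frame-alignment step described above relies on the standard ICP scheme: alternate between finding nearest-neighbour correspondences and solving a closed-form rigid fit. The sketch below is a minimal, self-contained 2D illustration of that scheme, not the paper's actual implementation (which operates on 3D RGB-D point clouds and uses feature-based initialization); the function names `best_rigid_2d` and `icp` are our own.

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form 2D rigid transform (rotation + translation) that best
    aligns paired points src -> dst (Kabsch/Procrustes in the plane)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centred point sets.
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        sxx += xs * xd + ys * yd      # cosine accumulator
        sxy += xs * yd - ys * xd      # sine accumulator
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate brute-force nearest-neighbour
    matching with the closed-form rigid fit above."""
    cur = list(src)
    for _ in range(iters):
        matched = [min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                   for p in cur]
        theta, tx, ty = best_rigid_2d(cur, matched)
        c, s = math.cos(theta), math.sin(theta)
        cur = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in cur]
    return cur
```

As the abstract notes, convergence of this loop depends on the spatial distribution of the points: with a small initial misalignment the nearest-neighbour step finds the true correspondences and the fit is recovered in one iteration, while sparse or poorly distributed depth data can cause wrong matches and a poor alignment.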
