Abstract

With the development of RGB-D sensors, a new alternative for the generation of 3D maps has appeared. First, features extracted from the color and depth images are used to localize each frame in the 3D scene. Next, the Iterative Closest Point (ICP) algorithm is used to align the RGB-D frames. As a result, each new frame is added to the dense 3D model. However, the spatial distribution and resolution of the depth data affect the performance of 3D scene reconstruction systems based on ICP. In this paper we propose to divide the depth data into sub-clouds with similar resolution, to align them separately, and to unify them into the entire point cloud. The presented computer simulation results show an improvement in the accuracy of 3D scene reconstruction using data from a real indoor environment.

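The proposed pipeline (partition the depth data into sub-clouds of similar resolution, align each sub-cloud with ICP, then merge the results into the entire point cloud) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes fixed depth-range bands as a stand-in for the resolution criterion and uses Open3D's point-to-point ICP in place of the paper's registration step.

```python
# Sketch only: depth-range bands approximate "sub-clouds with similar
# resolution"; Open3D ICP stands in for the paper's alignment step.
import numpy as np
import open3d as o3d


def split_by_depth(pcd, bands=((0.0, 1.5), (1.5, 3.0), (3.0, 6.0))):
    """Partition a point cloud into sub-clouds by depth (z) range,
    a proxy for the sensor's distance-dependent depth resolution."""
    pts = np.asarray(pcd.points)
    subclouds = []
    for zmin, zmax in bands:
        mask = (pts[:, 2] >= zmin) & (pts[:, 2] < zmax)
        sub = o3d.geometry.PointCloud()
        sub.points = o3d.utility.Vector3dVector(pts[mask])
        subclouds.append(sub)
    return subclouds


def align_by_subclouds(source, target, threshold=0.05):
    """Run ICP separately on each depth band of the source against the
    matching band of the target, then unify the aligned sub-clouds."""
    merged = o3d.geometry.PointCloud()
    for src_band, tgt_band in zip(split_by_depth(source), split_by_depth(target)):
        if len(src_band.points) == 0 or len(tgt_band.points) == 0:
            continue
        result = o3d.pipelines.registration.registration_icp(
            src_band, tgt_band, threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        src_band.transform(result.transformation)
        merged += src_band
    return merged
```

In practice the band limits (and the per-band correspondence threshold) would be chosen from the sensor's depth-error model, so that each sub-cloud groups points of comparable resolution before alignment.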