Abstract

This paper presents a method for building 3D indoor maps using the Kinect (an RGB-D camera), in which a depth camera and a color camera are aligned. We first extract SURF (Speeded-Up Robust Features) keypoints from the input color images and match the current features with the previous ones. The matched features are then converted to 3D points using the depth information. The 3D point sets of two consecutive images are registered by the ICP (Iterative Closest Point) algorithm to estimate the camera pose, and finally a 3D map is built. To demonstrate the effectiveness of the presented method, the map accuracy is evaluated by comparing the real environment with the 3D map built while a Kinect-equipped mobile robot travels indoors.
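The core numerical step of the registration described above can be sketched in a few lines. The following is a minimal NumPy illustration of the closed-form SVD solution for a rigid transform between two matched 3D point sets — the alignment step performed inside each ICP iteration. It assumes correspondences are already known (e.g. from matched SURF features back-projected to 3D with depth data); the function name and structure are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) minimizing
    sum ||R @ src[i] + t - dst[i]||^2 (the Kabsch/SVD solution).
    """
    src_c = src.mean(axis=0)                 # centroids
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

A full ICP loop would repeat this step, re-establishing nearest-neighbor correspondences after each transform update, until the alignment error converges; the estimated (R, t) between consecutive frames gives the incremental camera pose used to stitch the map.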
