Abstract

In this paper, we present an algorithm for robot simultaneous localization and mapping (SLAM) using a Kinect sensor, a red-green-blue and depth (RGB-D) sensor. Before the sensor is used as a measuring device for robot navigation, the distortions of the RGB and depth images are calibrated. The calibration procedure includes correction of the RGB image as well as alignment of the RGB lens with the depth lens. In the SLAM task, speeded-up robust features (SURF) are detected in the RGB image and used as landmarks for building the environment map. The depth image further provides the stereo information needed to initialize the three-dimensional coordinates of each landmark. The robot then estimates its own state and the landmark locations using the extended Kalman filter (EKF). Two SLAM experiments were carried out in this study, and the results show that the Kinect sensor can provide reliable measurement information for mobile robots navigating in unknown environments.
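The pipeline the abstract describes (back-projecting a detected feature to a 3-D landmark using the depth image, then refining the estimate with an EKF update) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole intrinsics `FX, FY, CX, CY` are assumed placeholder values for a Kinect-class RGB camera, and the EKF update is the generic textbook form.

```python
import numpy as np

# Assumed pinhole intrinsics for the RGB camera (illustrative values only,
# not taken from the paper; a real system would use the calibrated values).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, depth_m):
    """Convert a pixel (u, v) with metric depth into a 3-D camera-frame point.

    This is how a depth image can initialize the 3-D coordinates of a
    landmark detected (e.g., by SURF) in the aligned RGB image.
    """
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def ekf_update(mu, P, z, h, H, R):
    """One generic EKF measurement update.

    mu, P : state mean and covariance (robot pose + landmark positions)
    z     : observed measurement
    h     : predicted measurement h(mu)
    H     : Jacobian of the measurement model at mu
    R     : measurement noise covariance
    """
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    mu = mu + K @ (z - h)                # corrected mean
    P = (np.eye(len(mu)) - K @ H) @ P    # corrected covariance
    return mu, P

# Example: initialize a landmark observed at pixel (400, 250) with 2.0 m depth.
landmark = backproject(400, 250, 2.0)
```

In a full EKF-SLAM loop, each newly back-projected landmark would be appended to the state vector, and `ekf_update` would be called once per re-observed landmark at every frame.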
