Abstract

In this paper, we propose a novel mobile robot visual localization method consisting of two stages: map construction and visual localization. In the map construction stage, both a laser range finder and a camera are used to build a composite map: depth data are collected by the laser range finder, while distinctive descriptors of salient feature points are extracted from camera images. In the visual localization stage, only the camera is used; the robot detects feature points in camera images, computes their descriptors, matches them against the descriptors stored in the previously constructed composite map, and thereby determines its location. With this method, a robot can localize itself without an expensive laser range finder, so wider adoption can be expected due to the lower cost. Several experiments have been performed with the proposed method. The matching accuracy of the proposed feature extraction reaches 97.79%, compared with 92.96% for SURF. The experimental results show that our method not only reduces the hardware cost of robot localization but also offers high accuracy.
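The abstract does not specify the paper's descriptor or matching rule, so the sketch below illustrates the localization stage with a standard stand-in: nearest-neighbour descriptor matching with Lowe's ratio test, followed by averaging the positions of the matched map landmarks. The names `match_features` and `localize` are hypothetical, and generic descriptor vectors stand in for the paper's features.

```python
import numpy as np

def match_features(query_desc, map_desc, ratio=0.75):
    """Match each query descriptor to its nearest map descriptor.

    A match is accepted only if the nearest distance is clearly smaller
    than the second-nearest (Lowe's ratio test), which rejects ambiguous
    matches. Returns a list of (query_index, map_index) pairs.
    """
    matches = []
    for qi, q in enumerate(query_desc):
        d = np.linalg.norm(map_desc - q, axis=1)  # distance to every map feature
        order = np.argsort(d)
        best, second = order[0], order[1]
        if d[best] < ratio * d[second]:
            matches.append((qi, int(best)))
    return matches

def localize(matches, map_positions):
    """Estimate the robot location as the mean position of the matched
    map landmarks (a simple stand-in for the paper's decision step)."""
    idx = [mi for _, mi in matches]
    return np.mean(map_positions[idx], axis=0)
```

In practice the ratio threshold trades match count against reliability; 0.75 is a common default for SURF-like descriptors.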
