Abstract
The research described in this paper concerns indoor robot navigation, with emphasis on sensor modeling and calibration, environment representation, and self-localization. The main point is that combining all of these aspects yields an effective navigation system. We present a model of the catadioptric image formation process that simplifies the operations needed to process catadioptric images. Once the model of the catadioptric sensor is known, it must be calibrated with respect to the other sensors of the robot so that their information can be fused. When the sensors are mounted on a robot arm, a hand-eye calibration algorithm can be used; in our case the sensors are mounted on a mobile robot that moves over a flat floor, so they have fewer degrees of freedom. For this reason we develop a calibration algorithm for sensors mounted on a mobile robot. Finally, combining the previous results with a scan matching algorithm that we develop, we build 3D maps of the environment. These maps are used for the self-localization of the robot and to carry out path-following tasks. We present experiments that show the effectiveness of the proposed algorithms.
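As background, a common way to describe central catadioptric image formation is the unified sphere model: a 3D point is first projected onto a unit sphere centered at the single effective viewpoint and then reprojected through a point shifted along the mirror axis. The Python sketch below illustrates only that standard formulation, not necessarily the simplified model proposed in the paper; the function name, the mirror parameter xi, and the intrinsic matrix K are illustrative assumptions.

    import numpy as np

    def unified_projection(X, xi, K):
        """Project a 3D point with the unified central catadioptric model.

        X  : 3-vector expressed in the mirror (viewpoint) frame.
        xi : mirror parameter (0 = pinhole camera, 1 = parabolic mirror).
        K  : 3x3 matrix of generalized camera intrinsics (assumed values).
        """
        # 1. Project the point onto the unit sphere around the viewpoint.
        Xs = X / np.linalg.norm(X)
        # 2. Reproject from a point shifted by xi along the mirror axis
        #    (a normalized pinhole projection in the shifted frame).
        x, y, z = Xs
        m = np.array([x / (z + xi), y / (z + xi), 1.0])
        # 3. Apply the intrinsic parameters to obtain pixel coordinates.
        p = K @ m
        return p[:2]

    # Example with a parabolic mirror (xi = 1) and hypothetical intrinsics.
    K = np.array([[300.0,   0.0, 320.0],
                  [  0.0, 300.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    print(unified_projection(np.array([0.5, 0.2, 1.0]), xi=1.0, K=K))

Setting xi = 0 reduces the sketch to an ordinary pinhole projection, which is why this family of models is convenient when a catadioptric sensor must be calibrated against the robot's other sensors.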