Abstract

Autonomous navigation of a mobile robot comprises several basic techniques, such as mapping, localization, path planning, collision avoidance, and system architecture. Among these, localization is the most important, since a robot must know its pose to reach the desired destination reliably. Localization estimates the robot pose using an environmental map and sensor information; its performance therefore improves as the differences between the map and the real environment decrease. Representative examples of map-matching-based localization are as follows. The MCL (Monte Carlo Localization) method [1][2], which robustly estimates the robot pose, compares information from the sensors mounted on the robot with the environmental map. Vision-based SLAM using the SIFT (Scale-Invariant Feature Transform) algorithm [3] with a stereo camera has also been proposed [4][5]. These localization methods have been applied to many mobile robots, and their performance has been verified. Such schemes, however, tend to perform poorly when the map differs from the real environment because of artificial or natural changes. If the robot can detect such changes and reflect them in the map, navigation performance can be maintained despite environmental change. In this research, a new method for recognizing environmental changes and updating the current map is proposed. With this approach, the robot can navigate autonomously with high reliability and thus offer better services to humans. Despite the importance of map update, little attention has been paid to algorithms for updating a constructed map. This paper proposes a method for updating the constructed map reliably and simply: the particle filter algorithm [6], which has been used for localization, is adopted for the map update.
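The MCL scheme cited above follows the standard predict-weight-resample cycle of a particle filter. A minimal sketch of one such cycle is shown below; the function names, the pose representation, and the motion/likelihood callbacks are illustrative assumptions, not the paper's implementation.

```python
import random
import math

def mcl_step(particles, control, measurement, motion_model, likelihood):
    """One predict-weight-resample cycle of Monte Carlo Localization.

    particles:   list of pose hypotheses, e.g. (x, y, theta) tuples
    control:     odometry input applied since the last step
    measurement: current sensor reading, compared against the map
    motion_model(pose, control) -> new pose with sampled motion noise
    likelihood(pose, measurement) -> non-negative weight (map agreement)
    """
    # Predict: propagate each hypothesis through the noisy motion model.
    predicted = [motion_model(p, control) for p in particles]

    # Weight: score each hypothesis by how well the sensor reading
    # matches what the map predicts from that pose.
    weights = [likelihood(p, measurement) for p in predicted]
    total = sum(weights)
    if total == 0.0:
        # All hypotheses inconsistent with the measurement; keep prediction.
        return predicted
    weights = [w / total for w in weights]

    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))
```

In a toy 1-D setting with a Gaussian sensor likelihood, repeated calls concentrate the particle set around the true position, which is the convergence behavior the map-update method relies on.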
If the robot recognizes a visual feature, new samples representing candidates for the robot pose are drawn around that feature. After the newly drawn samples converge, the similarity between their poses and those of the robot's current samples is evaluated. The pose reliability of the recognized object is then calculated by applying this similarity to the Bayesian update formula [7]. An object whose pose reliability falls below a predetermined threshold is discarded; conversely, the new position of a moved visual feature is registered in the visual feature map if its pose reliability exceeds that threshold.
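The reliability bookkeeping described above can be sketched as a binary Bayes update, in the style of occupancy-grid cell updates. The binarized similarity input, the two conditional probabilities, and the threshold value are assumptions for illustration; the paper's actual formula [7] and parameters may differ.

```python
def update_reliability(prior, similar, p_match_if_valid=0.9, p_match_if_moved=0.2):
    """Binary Bayes update of a mapped feature's pose reliability.

    prior:   current belief that the mapped feature pose is still valid
    similar: True if the converged new samples agree with the robot's
             current samples, False otherwise (hypothetical binarization)
    p_match_if_valid / p_match_if_moved: assumed sensor-model values for
    observing agreement given a valid or a moved feature, respectively.
    """
    if similar:
        num = p_match_if_valid * prior
        den = num + p_match_if_moved * (1.0 - prior)
    else:
        num = (1.0 - p_match_if_valid) * prior
        den = num + (1.0 - p_match_if_moved) * (1.0 - prior)
    return num / den

def keep_feature(reliability, threshold=0.3):
    """Keep the feature on the map only while its reliability stays
    at or above the predetermined threshold."""
    return reliability >= threshold
```

Repeated disagreements drive the reliability toward zero, so a feature that has moved is eventually discarded, while agreements raise the reliability and confirm the mapped pose.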

