Abstract

Mobile autonomous robots have finally emerged from the confined spaces of structured and controlled indoor environments. To fulfill the promises of ubiquitous robotics in unstructured outdoor environments, robust navigation is a key requirement. Research in the simultaneous localization and mapping (SLAM) community has largely focused on optical sensors to solve this problem, while the fact that the robot is a physical entity has mostly been ignored. In this paper, a hierarchical SLAM framework is proposed that takes the interaction of the robot with the environment into account. A sequential Monte Carlo filter is used to generate local map segments with a combination of visual and embodied data associations. Constraints between segments are used to generate globally consistent maps with a focus on suitability for navigation tasks. The proposed method is experimentally verified on two different outdoor robots. The results show that the approach is viable and that the rich modeling of the robot's interaction with its environment provides a new modality with the potential for improving existing visual methods and extending the availability of SLAM to domains where visual processing alone is not sufficient.
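The abstract refers to a sequential Monte Carlo (particle) filter for building local map segments. As a generic illustration of that technique only (not the paper's implementation), the following sketch runs one predict-weight-resample cycle on a hypothetical 1-D robot position; the function name, noise parameters, and toy measurement model are all assumptions for illustration:

```python
import random
import math

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    """One predict-weight-resample cycle of a sequential Monte Carlo
    (particle) filter for a 1-D robot position. Toy model: the
    measurement is a direct, noisy observation of position."""
    # Predict: apply the control input with additive Gaussian motion noise.
    predicted = [p + control + random.gauss(0.0, motion_noise)
                 for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((measurement - p) ** 2) / (2 * meas_noise ** 2))
               for p in predicted]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))

# Usage: track a robot moving +1.0 per step, observed at its true position.
random.seed(0)
particles = [random.uniform(-5.0, 5.0) for _ in range(500)]
true_pos = 0.0
for _ in range(20):
    true_pos += 1.0
    particles = particle_filter_step(particles, 1.0, true_pos)
estimate = sum(particles) / len(particles)
print(estimate)  # converges near true_pos = 20.0
```

In the framework described above, the per-particle state and measurement model would be far richer (visual and embodied data associations over map segments), but the predict-weight-resample structure is the same.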
