Abstract
This paper addresses the problem of unstructured terrain modelling for motion planning in an insect-like walking robot. Starting from the well-established elevation grid concept, adapted to the specific requirements of a small legged robot with limited perceptual capabilities, we extend this idea to a multi-sensor system consisting of a pitched 2D laser scanner and a stereo vision camera. We extend the usual, ad hoc elevation grid update mechanism with a formal treatment of spatial uncertainty, and show how data from different sensors can be fused into a consistent terrain model. Moreover, the paper describes a novel method for filling in missing areas of the elevation grid, which arise from erroneous data or the line-of-sight constraints of the sensors. This method takes into account the uncertainty of the multi-sensor data collected in the elevation grid.
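The kind of uncertainty-weighted elevation grid update described above can be sketched as a per-cell 1D Kalman filter, where each cell keeps a height estimate and its variance, and measurements from different sensors are weighted by their own uncertainty. This is a minimal hypothetical illustration, not the paper's implementation; the class, method, and parameter names are assumptions.

```python
import numpy as np

class ElevationGrid:
    """Toy elevation grid: each cell stores a height estimate and its
    variance. A large initial variance marks the cell as unknown.
    Hypothetical sketch only, not the paper's actual code."""

    def __init__(self, rows, cols, init_var=1e6):
        self.height = np.zeros((rows, cols))
        self.var = np.full((rows, cols), init_var)

    def update(self, r, c, z, z_var):
        """Fuse one height measurement z (with variance z_var) into cell
        (r, c) via the standard scalar Kalman update, so less certain
        sensors contribute proportionally less."""
        k = self.var[r, c] / (self.var[r, c] + z_var)   # Kalman gain
        self.height[r, c] += k * (z - self.height[r, c])
        self.var[r, c] = (1.0 - k) * self.var[r, c]

# Fuse a low-variance laser reading and a noisier stereo reading
# of the same cell; the fused estimate lands closer to the laser.
grid = ElevationGrid(10, 10)
grid.update(3, 4, z=0.12, z_var=0.0004)  # laser: low variance
grid.update(3, 4, z=0.10, z_var=0.01)    # stereo: higher variance
```

After both updates the cell's variance is smaller than either single-sensor variance, which is the property that makes sensor fusion into one consistent terrain model worthwhile.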