Abstract

Advanced sensor systems, exploiting high integrity and multiple sensory modalities, have significantly increased the capabilities of autonomous vehicles and enlarged their application potential. This article describes two sensors relevant to mobile robot navigation: active vision systems and inertial sensors. Vision and inertial sensing are two sensory modalities that can be exploited for navigation, and the article presents our results on the use and integration of these two modalities. In a first example we present a computational solution to the problem of vision-based guidance of a moving observer, which detects the orientation of the camera set that maximises the value of the visual information. The algorithm exploits the geometric properties of log-polar mapping. The second example relies on the integration of inertial and visual information to detect the regions of the scene over which a mobile platform can be driven, in our case the ground plane. The solution is based on information about the scene obtained during a process of visual fixation, complemented by the information provided by the inertial sensors. The tests were performed with a mobile platform equipped with an active vision system and inertial sensors. The paper presents our results on the simulation of visual behaviours for navigation.
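
To make the first example concrete, the sketch below shows one common way of computing a log-polar resampling of an image: pixels are re-indexed by angle and by the logarithm of their distance from the fixation point, so resolution is concentrated around that point. This is a minimal illustration of the mapping itself, not the authors' guidance algorithm; the function name, bin counts, and nearest-neighbour sampling are assumptions made for brevity.

```python
import numpy as np

def log_polar_map(image, center, rho_bins=64, theta_bins=128, rho_min=1.0):
    """Resample a grayscale image onto a log-polar grid centred at `center`.

    Rings of exponentially growing radius give dense sampling near the
    fixation point and coarse sampling in the periphery, the geometric
    property that log-polar (foveal) sensing exploits.
    """
    h, w = image.shape
    cx, cy = center
    # Largest ring that still fits inside the image.
    log_rho_max = np.log(min(cx, cy, w - cx, h - cy))
    rhos = np.exp(np.linspace(np.log(rho_min), log_rho_max, rho_bins))
    thetas = np.linspace(0.0, 2.0 * np.pi, theta_bins, endpoint=False)
    # Cartesian sample position for every (rho, theta) cell.
    xs = (cx + rhos[:, None] * np.cos(thetas[None, :])).astype(int)
    ys = (cy + rhos[:, None] * np.sin(thetas[None, :])).astype(int)
    # Nearest-neighbour lookup, clamped to the image borders.
    return image[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]
```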
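
For the second example, a simple way to combine the two modalities is to use the inertially sensed gravity direction as the ground-plane normal and the fixation geometry for the camera height, then label as "drivable" the pixels whose back-projected 3-D points lie on that plane. The sketch below assumes a metric depth map from the stereo head, intrinsics `K`, and a unit gravity vector already rotated into the camera frame; the function name and tolerance are illustrative, not the paper's method.

```python
import numpy as np

def ground_plane_mask(depth, K, g, h, tol=0.05):
    """Mask pixels whose 3-D back-projections lie on the ground plane.

    depth : HxW metric z-depth map (e.g. from the stereo vision head)
    K     : 3x3 camera intrinsic matrix
    g     : unit gravity direction in the camera frame (inertial sensors)
    h     : camera height above the ground (from fixation geometry)
    A ground point X satisfies g . X = h (camera at the origin).
    """
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix          # rays with unit z-component
    pts = rays * depth.reshape(1, -1)      # 3-D points in the camera frame
    dist = np.abs(g @ pts - h)             # distance to the hypothesised plane
    return (dist < tol).reshape(H, W)
```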
