Abstract
Unlike micro aerial vehicles, most mobile robots are subject to non-holonomic constraints that make lateral movement impossible. Consequently, vision-based navigation systems that achieve accurate visual feature initialization by moving the camera sideways to ensure sufficient image parallax degrade when applied to mobile robots. To overcome this difficulty, a motion model based on wheel encoders mounted on the robot is generally used to predict the robot's pose, but such a model struggles to cope with errors caused by wheel slip or inaccurate wheel calibration. In this study, we propose a robust autonomous navigation system that uses only a stereo inertial sensor and does not rely on wheel-based dead reckoning. A line-feature observation model refined with vanishing points is incorporated into the visual-inertial odometry alongside point features, so that a mobile robot can perform robust pose estimation during autonomous navigation. The proposed algorithm, keyframe-based autonomous visual-inertial navigation (KAVIN), supports the entire navigation system and runs onboard without an additional graphics processing unit. A series of experiments in a real environment showed that the KAVIN system provides robust pose estimation without wheel encoders and prevents the accumulation of drift error during autonomous driving.
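As a rough illustration of the vanishing-point idea mentioned above (this is not the paper's actual observation model; the intrinsic matrix and direction vectors below are made-up values), the vanishing point of a family of parallel 3D lines is the image projection of their shared direction vector through the camera intrinsics:

```python
# Vanishing point of parallel 3D lines: project the shared direction
# vector d through the pinhole intrinsics K (v ~ K d, then dehomogenize).
# K and d below are illustrative values, not from the paper.

def vanishing_point(K, d):
    # Homogeneous image point: v = K @ d
    v = [sum(K[i][j] * d[j] for j in range(3)) for i in range(3)]
    # Dehomogenize; lines parallel to the image plane (d_z == 0)
    # have their vanishing point at infinity.
    if abs(v[2]) < 1e-12:
        return None
    return (v[0] / v[2], v[1] / v[2])

# Example intrinsics: focal length 500 px, principal point (320, 240).
K = [[500.0,   0.0, 320.0],
     [  0.0, 500.0, 240.0],
     [  0.0,   0.0,   1.0]]

# Lines pointing straight down the optical axis vanish at the principal point.
print(vanishing_point(K, [0.0, 0.0, 1.0]))  # (320.0, 240.0)
```

Because the vanishing point depends only on line direction (not position), it constrains the camera's rotation, which is one reason vanishing points are useful for suppressing orientation drift in visual-inertial odometry.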