Abstract

Vision-inertial odometry is a low-cost, lightweight, continuous, and reliable navigation and positioning method. To obtain accurate and reliable navigation information, the navigation system must cope with environmental interference. Turning and accelerating ego-motion, together with non-textured and dynamic scenes, pose unavoidable challenges for image processing; the resulting random interference from ego-motion uncertainty can make the estimation algorithm diverge and the positioning unreliable. The purpose of this paper is to develop a robust vision-aided inertial navigation strategy, which is divided into a front end and a back end. The front end uses a visual deep-learning framework based on a recurrent neural network for end-to-end state estimation. The back end applies an extended Kalman filter in the vehicle coordinate system and measures the degree of abnormity of the system online, so that the filtering method can be adjusted dynamically according to the system's uncertainty. Experiments on an unmanned ground vehicle using the KITTI dataset were conducted under drastic changes in vehicle motion state and environment. The results show that the robust vision-inertial odometry navigation system is robust and adaptive against external interference and can improve the positioning accuracy of unmanned ground vehicles.
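The abstract's back end, an extended Kalman filter whose behavior is adjusted online by a measured "degree of abnormity", can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: here the normalized innovation squared stands in for the abnormity measure, the chi-square gate value and the noise-inflation rule are assumptions, and `adaptive_ekf_update` is a hypothetical helper name.

```python
import numpy as np

def adaptive_ekf_update(x, P, z, H, R, gate=9.21):
    """One EKF measurement update with abnormity-based noise inflation.

    x : (n,) state estimate          P : (n, n) state covariance
    z : (m,) visual measurement      H : (m, n) measurement Jacobian
    R : (m, m) nominal measurement noise covariance
    gate : chi-square threshold (9.21 ~ 99th percentile for 2 DOF)

    Returns the updated state, updated covariance, and the abnormity
    statistic d2 (normalized innovation squared, a common stand-in
    for an online uncertainty / abnormity measure).
    """
    y = z - H @ x                           # innovation (residual)
    S = H @ P @ H.T + R                     # innovation covariance
    d2 = float(y @ np.linalg.solve(S, y))   # normalized innovation squared

    if d2 > gate:
        # Measurement looks abnormal (e.g. dynamic or non-textured
        # scene, aggressive ego-motion): inflate R so the filter
        # trusts the inertial prediction more instead of diverging.
        S = H @ P @ H.T + (d2 / gate) * R

    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, d2
```

When `d2` stays below the gate the update is a standard EKF step; when a visual measurement is flagged as abnormal, its influence on the state is softened rather than applied at full weight, which is one simple way to realize the dynamic adjustment the abstract describes.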
