Abstract

This paper addresses the fusion problem for GNSS/INS integrated navigation aided by a vision sensor. The basic principle of visual navigation is presented and discussed. Given the three different sampling rates of the sensors, we design a vision-aided GNSS/INS fusion method based on a sequential Kalman filter structure that mixes an extended Kalman filter (EKF) with a linear Kalman filter (LKF). The fusion computation comprises three layers. (i) The first layer is the INS mechanization and its 15-dimensional error-state model, covering three-dimensional position, velocity, and attitude (PVA) errors, gyroscope drifts, and accelerometer biases. (ii) The second layer is a monocular camera model with a sliding window, in which FAST features are extracted and tracked across a set of corrected frames. If the parallax is sufficient, triangulation followed by Gauss-Newton optimization estimates the feature coordinates in the navigation frame. Because each feature is tracked over multiple frames, the camera state is naturally constrained by a nonlinear model through each landmark. By linearizing this model, we apply the EKF to estimate the error state and compensate the INS PVA. (iii) The third layer is the lowest-sampling-rate GNSS position/velocity (PV) model. With the INS state corrected in (ii), a traditional GNSS/INS loosely coupled integration is implemented through the LKF. Finally, a real land-vehicle dataset (KITTI) is employed to demonstrate the efficiency of the proposed fusion method. The results show competitive accuracy and reliability. With the aid of visual odometry (VO), the performance of GNSS/INS is significantly improved; in particular, in GNSS-challenged environments the monocular-camera corrections allow the INS to endure much longer periods without GNSS signals.
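The sequential-update structure described above can be illustrated with a minimal sketch. This is not the paper's implementation: it replaces the 15-dimensional INS error state with a toy 2-D position/velocity state, and both the vision and GNSS measurements are modeled as linear position pseudo-measurements (the paper's vision update is an EKF on a linearized landmark model). The function and variable names are illustrative assumptions; only the pattern of one prediction step interleaved with two measurement updates at different rates reflects the abstract.

```python
import numpy as np

def predict(x, P, F, Q):
    """INS-style time update: propagate state and covariance."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, H, R, z):
    """Measurement update shared by the LKF and (after linearization) the EKF."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy state [px, py, vx, vy]; constant-velocity model at a 10 Hz INS rate.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
Q = 1e-3 * np.eye(4)

x = np.zeros(4)
P = np.eye(4)

H_cam = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])   # vision-derived position pseudo-measurement
H_gnss = H_cam.copy()                  # GNSS PV model here reduced to position only
R_cam = 0.5 * np.eye(2)
R_gnss = 2.0 * np.eye(2)

for k in range(1, 101):                # 10 s of 10 Hz propagation
    x, P = predict(x, P, F, Q)
    z = np.array([0.1 * k * dt, 0.0])  # synthetic measurement along +x
    if k % 2 == 0:                     # 5 Hz vision update corrects the INS first
        x, P = update(x, P, H_cam, R_cam, z)
    if k % 10 == 0:                    # 1 Hz GNSS update runs on the corrected state
        x, P = update(x, P, H_gnss, R_gnss, z)

print(np.round(x[:2], 3))              # fused position estimate
```

The key design point carried over from the abstract is the ordering: within each epoch the higher-rate vision correction is applied before the low-rate GNSS update, so the GNSS LKF always operates on an INS state already compensated by the camera.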
