Abstract
Vision-inertial navigation offers a promising solution for aircraft to estimate ego-motion accurately in environments devoid of Global Navigation Satellite System (GNSS) coverage. However, existing approaches adapt poorly to fixed-wing aircraft, whose high maneuverability and sparse visual features lead to low accuracy and poor real-time performance. This paper introduces a novel vision-inertial heterogeneous data fusion methodology that aims to enhance the navigation accuracy and computational efficiency of fixed-wing aircraft during landing. The visual front-end of the system extracts multi-scale infrared runway features and computes a geo-referenced runway image as the observation. The infrared runway features are recognized efficiently and robustly from blurry infrared images by a lightweight end-to-end neural network, and the geo-referenced runway is generated by projecting the runway's prior geographic information using the prior pose. The fusion back-end of the navigation system is a Covariance Feedback Control based Cubature Kalman Filter (CFC-CKF) framework, which tightly integrates visual observations and inertial measurements for zero-drift pose estimation and curbs the effect of inaccurate kinematic noise statistics. Finally, real flight experiments demonstrate that the algorithm can estimate the pose at a frequency of 100 Hz and fulfill the navigation accuracy requirements for high-speed landing of fixed-wing aircraft.
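For readers unfamiliar with the cubature Kalman filter that underlies the CFC-CKF back-end, the sketch below shows one standard predict/update cycle of a plain CKF in NumPy. This is a generic illustration only, not the paper's CFC-CKF: the covariance feedback control, the specific state vector, and the runway-image measurement model are all omitted, and the function names (`cubature_points`, `ckf_step`) are hypothetical.

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points for state mean x and covariance P."""
    n = len(x)
    S = np.linalg.cholesky(P)                              # P = S @ S.T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # unit cubature directions
    return x[:, None] + S @ xi                             # shape (n, 2n)

def ckf_step(x, P, f, h, Q, R, z):
    """One predict+update cycle of a generic cubature Kalman filter.

    f: process model, h: measurement model (both map 1-D state to 1-D vector),
    Q/R: process/measurement noise covariances, z: measurement vector.
    """
    n = len(x)
    # --- predict: propagate each cubature point through the process model ---
    Xp = np.column_stack([f(c) for c in cubature_points(x, P).T])
    x_pred = Xp.mean(axis=1)
    dXp = Xp - x_pred[:, None]
    P_pred = dXp @ dXp.T / (2 * n) + Q
    # --- update: propagate fresh cubature points through the measurement model ---
    Xu = cubature_points(x_pred, P_pred)
    Zu = np.column_stack([h(c) for c in Xu.T])
    z_pred = Zu.mean(axis=1)
    dz = Zu - z_pred[:, None]
    dx = Xu - x_pred[:, None]
    Pzz = dz @ dz.T / (2 * n) + R         # innovation covariance
    Pxz = dx @ dz.T / (2 * n)             # state-measurement cross-covariance
    K = Pxz @ np.linalg.inv(Pzz)          # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```

In the paper's setting, `h` would correspond to projecting the runway's prior geographic information into the image, and the covariance feedback loop would additionally adjust `Q` online when the kinematic noise statistics prove inaccurate.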