Abstract

Combined vision system (CVS) is a perspective-display concept that enhances the situation awareness of pilots during aircraft landing by integrating a real 2-D image captured by a forward-looking infrared camera with a synthetic 3-D image rendered from the aircraft pose and a terrain database. However, inertial measurement errors significantly degrade the conformal display of combined vision. This article proposes a novel method for registering the real and synthetic images based on visual–inertial fusion. It comprises the following key steps: (1) detect and extract the real runway features from the forward-looking infrared image; (2) generate the synthetic runway features simultaneously; (3) set up a vision measurement model with the real and synthetic runway features; (4) fuse inertial data and visual observations in a square-root unscented Kalman filter; (5) render a synthetic 3-D scene from the filtered pose data and integrate it with the real 2-D image. The experimental results demonstrate that our method guarantees the conformal display of the combined vision system in GPS-denied and low-visibility conditions.
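Step (3) of the pipeline, the vision measurement model, can be illustrated with a minimal sketch: project the synthetic 3-D runway corners into the image at the inertially predicted pose with a pinhole camera model, and form the measurement residual against the runway corners detected in the FLIR image. The function names, the pinhole model, and all parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_points(K, R, t, pts_world):
    """Project 3-D world points into the image with a pinhole camera model.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation.
    Points are assumed to lie in front of the camera (positive depth).
    """
    pts_cam = R @ pts_world.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ pts_cam                              # camera -> homogeneous pixels
    return (uv[:2] / uv[2]).T                     # perspective divide

def runway_residual(K, R, t, runway_corners_3d, detected_corners_2d):
    """Residual: detected FLIR corners minus reprojected synthetic corners."""
    predicted = project_points(K, R, t, runway_corners_3d)
    return (detected_corners_2d - predicted).ravel()
```

In a filter, this residual (the innovation) drives the correction of the inertially propagated pose so that the synthetic scene stays conformal with the real image.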

Highlights

  • Landing is the most accident-prone flight stage for fixed-wing aircraft, since it requires the aircraft to descend and brake rapidly in a narrow airspace

  • With the rapid development of image processing and infrared sensing, these technologies have been applied to airborne cockpit electronic systems to improve flight safety, especially in GPS-denied and low-visibility conditions

  • As a novel airborne assisted-landing means, a combined vision system (CVS) can provide the crew with an equivalent visual operation capability via a perspective flight scene[2] during landing. It integrates the real 2-D image captured by a forward-looking infrared (FLIR) camera with the synthetic 3-D image derived from the aircraft pose and the terrain database,[3] and displays the superimposed image to the crew



Introduction

Landing is the most accident-prone flight stage for fixed-wing aircraft, since it requires the aircraft to descend and brake rapidly in a narrow airspace. As a novel airborne assisted-landing means, a combined vision system (CVS) can provide the crew with an equivalent visual operation capability via a perspective flight scene[2] during landing. It integrates the real 2-D image captured by a forward-looking infrared (FLIR) camera with the synthetic 3-D image derived from the aircraft pose and the terrain database,[3] and displays the superimposed image to the crew. Although the above methods have achieved remarkable progress in vision-based landing navigation, they cannot provide accurate aircraft pose parameters at a high update rate to support 2-D–3-D image registration in low-visibility conditions. We propose to use real and synthetic runway features to create vision cues and to integrate them with inertial data in a square-root unscented Kalman filter (SR-UKF)[32] to estimate the motion errors.
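The SR-UKF propagates a Cholesky factor of the state covariance instead of the full matrix, which improves numerical stability. A minimal sketch of the scaled sigma-point generation common to square-root unscented filters is shown below; the function name and default tuning parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigma_points_sqrt(x, S, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled sigma points from state mean x and the Cholesky factor S
    of the covariance (the square-root form used by SR-UKF variants).

    Returns the (n, 2n+1) sigma-point matrix and the mean/cov weights.
    """
    n = x.size
    lam = alpha**2 * (n + kappa) - n          # scaling parameter
    gamma = np.sqrt(n + lam)                  # spread of the points
    pts = np.column_stack([x,
                           x[:, None] + gamma * S,
                           x[:, None] - gamma * S])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)   # extra term for covariance
    return pts, wm, wc
```

By construction, the weighted mean of the sigma points recovers x, and the weighted outer products recover S·Sᵀ, so the nonlinear measurement model (e.g. runway-feature reprojection) can be evaluated pointwise without linearization.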

From {D} to {E}
From {E} to {G}
From {C} to {P}
ROI detection
Line segments extraction
Runway line fitting
Vertex calculation
Initialization
Vision measurement update
Time update
Experiments
49 × 77, 106 × 214, 164 × 488
Conclusion and future work
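The outline's "Runway line fitting" and "Vertex calculation" steps suggest that runway vertices are obtained as intersections of fitted edge lines. A hedged sketch using homogeneous 2-D geometry is given below; the helper names and the cross-product formulation are assumptions for illustration, not necessarily the authors' method.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2-D points
    (cross product of the homogeneous point vectors)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y).
    Assumes the lines are not parallel (h[2] != 0)."""
    h = np.cross(l1, l2)
    return h[:2] / h[2]
```

For example, the intersection of a fitted left runway edge with the runway threshold line would yield one threshold vertex in image coordinates.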
RTCA DO-315B
