Abstract

This paper proposes a novel infrared-inertial navigation method for the precise landing of commercial aircraft in low-visibility and Global Positioning System (GPS)-denied environments. Within a square-root unscented Kalman filter (SR_UKF), inertial measurement unit (IMU) data, forward-looking infrared (FLIR) images, and airport geo-information are integrated to estimate the position, velocity, and attitude of the aircraft during landing. The homography between the synthetic image and the real image, which encodes the camera pose deviations, is constructed as the vision measurement. To extract real runway features accurately, the runway detection result for the current frame is used as prior knowledge for detection in the next frame. To avoid the ambiguity among the possible homography decomposition solutions, the homography matrix is converted directly into a vector and fed to the SR_UKF. Moreover, the proposed navigation system is proven observable by a nonlinear observability analysis. Finally, a general aircraft was specially equipped with vision and inertial sensors to collect flight data for algorithm verification. The experimental results demonstrate that the proposed method can support the precise landing of commercial aircraft in low-visibility and GPS-denied environments.
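The estimator at the core of the pipeline is a square-root unscented Kalman filter. As a rough illustration of the unscented machinery it relies on (a generic sketch, not the authors' implementation; the state dimension, scaling parameters, and test function below are illustrative assumptions), the following code generates sigma points from a Cholesky square-root factor of the covariance and propagates them through a nonlinear function:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and their weights from a
    Cholesky (square-root) factor of the covariance."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)      # square-root factor
    pts = np.vstack([mean,
                     mean + S.T,                 # rows of S.T are the columns of S
                     mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wm[0] = lam / (n + lam)
    wc = wm.copy()
    wc[0] += 1.0 - alpha**2 + beta
    return pts, wm, wc

def unscented_transform(pts, wm, wc, f):
    """Propagate sigma points through f, then recover the transformed
    mean and covariance as weighted sums."""
    Y = np.array([f(p) for p in pts])
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear function the transform is exact, which makes a convenient sanity check; in the paper's setting `f` would be the process or measurement model acting on the pose/velocity state.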

Highlights

  • Landing is the most accident-prone phase of flight for both military and civil aircraft

  • Researchers at Airbus [23,24,25] designed two nonlinear observers based on a high-gain approach and sliding-mode theory and applied them to a vision-based solution for civil aircraft landing on an unknown runway

  • Because the homography matrix contains the deviation of the aircraft pose, decomposing it by the traditional method [40,41] yields four groups of possible solutions; the set of solutions closest to the true value, i.e., the aircraft pose deviation, can then be selected using prior knowledge as the UKF measurement
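The measurement construction in the last highlight can be made concrete: a homography estimated between synthetic and real runway corner points can be flattened into a vector and used directly as the filter measurement, sidestepping the four-fold decomposition ambiguity. The sketch below uses a generic direct linear transform (DLT), not the paper's code, and the corner coordinates are illustrative assumptions:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (>= 4 point pairs)
    with the direct linear transform: two linear constraints per
    correspondence, solved via the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                  # fix the scale so h33 = 1

def homography_measurement(H):
    """Use the flattened homography itself as the filter measurement,
    dropping h33 (fixed to 1) -- no decomposition into the four
    candidate rotation/translation solutions is needed."""
    return H.flatten()[:8]

# Synthetic runway corners vs. "real" corners shifted by (2, 1) pixels:
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(x + 2.0, y + 1.0) for x, y in src]
z = homography_measurement(homography_dlt(src, dst))
```

For this pure-translation example the recovered homography is the identity with a (2, 1) translation column, so the 8-vector measurement exposes the pose deviation directly.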


Introduction

Landing is the most accident-prone phase of flight for both military and civil aircraft. Gui et al. [18] proposed an airborne vision-based navigation approach for accurate UAV landing based on artificial markers; this method requires installing a visible-light camera integrated with a DSP processor on the UAV and placing four infrared lamps on the runway. Researchers at Airbus [23,24,25] designed two nonlinear observers based on a high-gain approach and sliding-mode theory and applied them to a vision-based solution for civil aircraft landing on an unknown runway; this method does not exploit inertial measurements with their high update rate. Ruchanurucks et al. [26] used an Efficient Perspective-n-Point (EPnP) solution to estimate the relative pose in an automatic aided landing system for landing a fixed-wing UAV on a runway; the accuracy of this method is susceptible to runway detection errors.

Methodology
  Framework of Infrared-Inertial Landing Navigation
  Projection
  Visual-Inertial Navigation
    Process Modeling
    Vision Measurement Model
    Other Observations
  Observability
    Nonlinear Observability
    Observability Analysis
Experiments Preparation
Runway Detection Experiment
Motion Estimation Experiment
  [Fig./Table 12: Errors of motion estimation]
Findings
Conclusions and Future Works