Abstract

Altitude estimation is one of the fundamental tasks of unmanned aerial vehicle (UAV) automatic navigation; it aims to accurately and robustly estimate the relative altitude between the UAV and a specific area. However, most methods rely on auxiliary signal reception or expensive equipment, which is not always available or applicable owing to signal interference, cost, or power-consumption constraints in real application scenarios. In addition, fixed-wing UAVs have more complex kinematic models than vertical take-off and landing UAVs. An altitude estimation method that can be applied robustly to fixed-wing UAVs in a GPS-denied environment is therefore needed. In this paper, we present a high-precision altitude estimation method that combines visual information from a monocular camera and pose information from the inertial measurement unit (IMU) through a novel end-to-end deep neural network architecture. Our method has several advantages over existing approaches. First, we combine visual-inertial information with physics-based reasoning to build an ideal altitude model, which gives the neural network general applicability and data efficiency. Second, we design a novel feature fusion module that removes the tedious manual calibration and synchronization of the camera and IMU required by standard visual or visual-inertial methods to obtain the data association for altitude estimation modeling. Finally, the proposed method is evaluated and validated on real flight data recorded during the landing phase of a fixed-wing UAV. The results show that the average estimation error of our method is less than 3% of the true altitude, a substantial improvement in accuracy over other visual and visual-inertial based methods.

Highlights

  • The process of estimating the relative altitude between the unmanned aerial vehicle (UAV) and a specific area is usually known as altitude estimation

  • Considering the issues above and inspired by previous works [25,26,27,28,29], this paper explores integrating physics-based reasoning into modern convolutional neural network (CNN)-LSTM-based models and fusing different types of features to further improve altitude estimation for fixed-wing UAV landing

  • We first introduce the real datasets used in the experiments and the details of the experimental implementation, and then present altitude estimation results on different data sequences recorded during the auto-landing phase of a real fixed-wing UAV
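The paper's exact network layers are not given in this excerpt, so the following is only an illustrative NumPy sketch of the general CNN-LSTM fusion idea the highlights describe: per-timestep visual features (assumed to be already extracted by a CNN) and IMU features are fused by concatenation and fed to a single LSTM layer with a linear regression head. All dimensions and the names `estimate_altitude` and `lstm_step` are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the four gates are stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:n]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2*n]))     # forget gate
    g = np.tanh(z[2*n:3*n])                 # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3*n:]))      # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def estimate_altitude(visual_feats, imu_feats, params):
    """Fuse per-timestep visual and IMU features by concatenation
    (no explicit camera-IMU time alignment modeled here), then
    regress one altitude per timestep with an LSTM + linear head."""
    W, U, b, w_out, b_out = params
    n = U.shape[1]
    h, c = np.zeros(n), np.zeros(n)
    alts = []
    for v, m in zip(visual_feats, imu_feats):
        x = np.concatenate([v, m])          # feature-level fusion
        h, c = lstm_step(x, h, c, W, U, b)
        alts.append(w_out @ h + b_out)      # scalar altitude estimate
    return np.array(alts)

# Toy dimensions: 8-dim visual features, 4-dim IMU features, hidden size 16.
dv, dm, n, T = 8, 4, 16, 10
params = (rng.normal(size=(4 * n, dv + dm)) * 0.1,   # W: input weights
          rng.normal(size=(4 * n, n)) * 0.1,         # U: recurrent weights
          np.zeros(4 * n),                           # b: gate biases
          rng.normal(size=n) * 0.1,                  # w_out: output weights
          0.0)                                       # b_out: output bias
alt = estimate_altitude(rng.normal(size=(T, dv)),
                        rng.normal(size=(T, dm)), params)
print(alt.shape)  # one altitude estimate per timestep
```

With random weights the outputs are of course meaningless; in the paper's setting such parameters would be learned end-to-end from flight data.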


Introduction

The process of estimating the relative altitude between the UAV and a specific area is usually known as altitude estimation. Conventional solutions rely on the inertial navigation system (INS), the barometric altimeter, and other active ranging sensors. In most cases, INS methods must be compensated with other active ranging sensors to estimate altitude. The barometric altimeter is the conventional altimetric sensor for UAVs in high-altitude environments, but when the UAV is close to the ground, many factors (such as weather, local air temperature, and humidity) degrade its estimation precision. The altitude estimates provided by these methods are therefore typically inaccurate [5], or expensive, high-power-consumption equipment is needed to guarantee precision. This is where the advantages of visual-based altitude estimation methods become important
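The barometric altimeter's sensitivity to local air temperature can be illustrated with the standard-atmosphere hypsometric formula. The constants below are the standard sea-level values; the 10 K offset is a hypothetical example, not a figure from the paper.

```python
# Standard-atmosphere constants (troposphere)
P0 = 101325.0   # sea-level pressure, Pa
T0 = 288.15     # sea-level temperature, K
L  = 0.0065     # temperature lapse rate, K/m
g  = 9.80665    # gravitational acceleration, m/s^2
R  = 287.053    # specific gas constant for dry air, J/(kg*K)

def baro_altitude(p, t0=T0):
    """Pressure altitude from the hypsometric formula, assuming a
    linear temperature profile starting at surface temperature t0."""
    return (t0 / L) * (1.0 - (p / P0) ** (R * L / g))

h_isa  = baro_altitude(100000.0)            # altitude assuming standard T0
h_warm = baro_altitude(100000.0, T0 + 10)   # same pressure, air 10 K warmer
print(h_isa, h_warm - h_isa)
```

With these constants, a reading of 100 kPa corresponds to roughly 111 m, and a 10 K surface-temperature offset shifts the estimate by close to 4 m (about 3.5%), which illustrates why uncalibrated barometric readings are unreliable near the ground.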

