Abstract

Aircraft pose estimation is an essential technology in aerospace applications, and accurate pose parameters are the foundation of many aerospace tasks. In this paper, we propose a novel pose estimation method for straight-wing aircraft that relies on neither 3D models nor other datasets; two widely separated cameras are used to acquire the pose information. Because of the large baseline and long-distance imaging, feature point matching is difficult and inaccurate in this configuration. In our method, line features are extracted to describe the structure of the straight-wing aircraft in the images, and pose estimation is performed based on the common geometric constraints of straight-wing aircraft. The spatial and length consistency of the line features is used to exclude irrelevant line segments belonging to the background or to other parts of the aircraft, and density-based parallel-line clustering is used to extract the aircraft's main structure. After the orientations of the fuselage and wings are identified in the images, plane intersection is used to estimate the 3D position and attitude of the aircraft. Experimental results show that our method estimates the aircraft pose accurately and robustly.
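The final geometric step described above, intersecting two back-projected planes to recover the 3D orientation of the fuselage (or wings), can be sketched as follows. This is a minimal illustration under assumptions of our own, not the paper's implementation: the helper names, the synthetic camera centers, and the use of world-frame points on the viewing rays of each image line are all hypothetical choices made for the example.

```python
import numpy as np

def backprojected_plane_normal(cam_center, ray_pt_a, ray_pt_b):
    """Unit normal of the plane spanned by the camera center and a
    projected image line, given two world-frame points lying on the
    viewing rays of the line's endpoints (hypothetical helper)."""
    n = np.cross(ray_pt_a - cam_center, ray_pt_b - cam_center)
    return n / np.linalg.norm(n)

def line_direction_from_planes(n1, n2):
    """Direction of the 3D line where two back-projected planes meet.
    Defined only up to sign; degenerate if the planes are parallel."""
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

# Synthetic check: a fuselage axis along +x at height z = 10,
# observed by two widely separated cameras off the axis plane.
p1, p2 = np.array([0.0, 0.0, 10.0]), np.array([1.0, 0.0, 10.0])
c1, c2 = np.array([-5.0, 2.0, 0.0]), np.array([5.0, -3.0, 0.0])

n1 = backprojected_plane_normal(c1, p1, p2)
n2 = backprojected_plane_normal(c2, p1, p2)
direction = line_direction_from_planes(n1, n2)  # ±[1, 0, 0]
```

Each camera, together with the fuselage line it observes, defines one plane in 3D; the fuselage axis must lie in both planes, so the cross product of the two plane normals yields its direction (up to sign), which is why a wide baseline between the cameras is helpful: it keeps the two planes far from parallel.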

Highlights

  • Since the 3D pose parameters of an aircraft provide valuable information about its flight status, effective and accurate pose estimation is a key technique in many aerospace applications, such as autonomous navigation [1], auxiliary landing [2], collision avoidance [3], accident analysis, and flight control system testing [4,5].

  • On-board vision methods use monocular, depth, or stereo cameras mounted on the aircraft to estimate its pose relative to a particular target or marker, whereas external vision methods use external cameras to acquire the pose of an aircraft from its 2D projected images.

  • We propose a novel structure extraction method that identifies the orientation of the fuselage and wings of a straight-wing aircraft in a 2D image without requiring 3D models or other datasets.


Introduction

Since the 3D pose parameters of an aircraft provide valuable information about its flight status, effective and accurate pose estimation is a key technique in many aerospace applications, such as autonomous navigation [1], auxiliary landing [2], collision avoidance [3], accident analysis, and flight control system testing [4,5]. With the development of imaging technology and computer vision, vision-based pose estimation has become a research hotspot, and many methods have been proposed in the literature to estimate the pose of an aircraft using visual sensors. Visual sensors are well suited to aircraft pose estimation because vision-based methods offer strong anti-interference ability, low cost, and high precision [6]. Vision-based pose estimation methods can be divided into two categories, on-board vision and external vision, depending on where the visual sensors are mounted. On-board vision methods use monocular, depth, or stereo cameras mounted on the aircraft to estimate its pose relative to a particular target or marker, whereas external vision methods use external cameras to acquire the pose of an aircraft from its 2D projected images. Among the on-board vision methods, a binocular stereovision model established by Chen et al. [7]
