Abstract
In this paper, we are interested in estimating the 3D pose of aircraft by leveraging 2D key-point localization. Monocular vision-based pose estimation for aircraft can be widely applied in airspace tasks such as flight control systems, air traffic management, autonomous navigation, and air defense systems. Nonetheless, prior methods based on direct regression or classification cannot meet the high-precision requirements of aircraft pose estimation, while other approaches based on PnP algorithms need additional prior knowledge such as a template 3D model or depth. These methods do not fully exploit the correlation between 2D key-points and 3D pose. In this paper, we present a multi-branch convolutional neural network, named the AirPose network, to address 3D pose estimation based on 2D key-point information. In addition, a novel feature fusion method is explored to enable the orientation estimation branch to adequately exploit key-point information. Our feature fusion method significantly decreases 3D pose estimation error and also avoids the need for RANSAC-based PnP algorithms. To address the lack of a dedicated aircraft 3D pose dataset for training and testing, we build a visual simulation platform on Unreal Engine 4 that applies domain randomization (DR), named the AKO platform, which generates aircraft images automatically labeled with 3D orientation and key-point locations; the resulting dataset is called the AKO dataset. We conduct a series of ablation experiments to evaluate our framework on aircraft object detection, key-point localization, and orientation estimation on the AKO dataset. Experiments show that our proposed AirPose network, trained on the AKO dataset, achieves convincing results on each of these tasks.
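The abstract describes fusing key-point information into the orientation branch but does not specify the fusion operator. Below is a minimal PyTorch sketch of one plausible realization, assuming channel-wise concatenation of backbone feature maps with predicted key-point heatmaps; the class name `FusedPoseNet`, the quaternion output, and all layer sizes are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a multi-branch pose network with key-point/orientation
# feature fusion. Concatenation is an assumed fusion operator; the paper's
# exact design is not given in this abstract.
import torch
import torch.nn as nn

class FusedPoseNet(nn.Module):
    def __init__(self, num_keypoints=8, feat_channels=256):
        super().__init__()
        # Shared CNN backbone producing a spatial feature map (placeholder stack).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Key-point branch: one heatmap per 2D key-point.
        self.keypoint_head = nn.Conv2d(feat_channels, num_keypoints, 1)
        # Orientation branch consumes backbone features concatenated with the
        # key-point heatmaps, so it can exploit key-point evidence directly.
        self.orientation_head = nn.Sequential(
            nn.Conv2d(feat_channels + num_keypoints, feat_channels, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_channels, 4),  # e.g. a unit quaternion for 3D orientation
        )

    def forward(self, images):
        feats = self.backbone(images)
        heatmaps = self.keypoint_head(feats)
        # Feature fusion: channel-wise concatenation of feature maps and heatmaps.
        fused = torch.cat([feats, heatmaps], dim=1)
        quat = self.orientation_head(fused)
        return heatmaps, quat / quat.norm(dim=1, keepdim=True)
```

Because the fused tensor carries explicit key-point evidence, the orientation head is not forced to rediscover key-point structure from the raw feature maps, which is one way such a fusion could reduce orientation error without resorting to a RANSAC-based PnP step.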
Highlights
3D pose estimation of aircraft is a challenging problem, facilitated by the well-developed aircraft detection algorithms of recent years [1]–[5]
As a higher-level task built on aircraft detection, 3D aircraft pose estimation can be widely utilized in many airspace tasks, such as vision-based flight control systems [6], [7], air traffic management [8], autonomous navigation, and air defense systems
It can be concluded that our simple but effective feature fusion method greatly improves accuracy compared with directly inferring the 3D orientation of the object from feature maps alone
Summary
3D pose estimation of aircraft is a challenging problem, facilitated by the well-developed aircraft detection algorithms of recent years [1]–[5]. As a higher-level task built on aircraft detection, 3D aircraft pose estimation can be widely utilized in many airspace tasks, such as vision-based flight control systems [6], [7], air traffic management [8], autonomous navigation, and air defense systems. Compared with infrared sensors and radar systems, a monocular visible-light camera can capture images with more detail and higher resolution. With the progress of deep-learning-based methods on visible-light images in recent years, the monocular visible-light sensor has become an effective supplement for airspace situational awareness. We break the 3D aircraft pose estimation problem down into three subtasks: aircraft object detection, 2D key-point localization, and 3D aircraft orientation estimation.
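To make the three-subtask decomposition concrete, here is a minimal pipeline sketch; `detector`, `localizer`, and `estimator` stand in for the three trained models and are hypothetical interfaces, not the paper's actual APIs.

```python
# A hypothetical end-to-end pipeline mirroring the three-subtask decomposition.
# `image` is assumed to be an array-like (H, W, C) frame; the three callables
# are placeholders for the detection, localization, and orientation models.
def estimate_aircraft_poses(image, detector, localizer, estimator):
    """Return one 3D orientation estimate per detected aircraft."""
    results = []
    for (x0, y0, x1, y1) in detector(image):        # 1) aircraft object detection
        crop = image[y0:y1, x0:x1]                  # per-aircraft region of interest
        keypoints = localizer(crop)                 # 2) 2D key-point localization
        results.append(estimator(crop, keypoints))  # 3) 3D orientation estimation
    return results
```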