Abstract

Visual information is important during the final approach and landing phases of flight. For an approaching aircraft it provides a supplementary source for the navigation system, offers backup guidance when radio navigation fails, and can even support a fully vision-based landing. Relative position and attitude can be solved from runway features in the image. Traditional runway detection methods have high latency and low accuracy, which cannot satisfy the requirements of a safe landing. This paper proposes a real-time runway detection model, the efficient runway feature extractor (ERFE), based on a deep convolutional neural network, which generates semantic segmentation and feature-line outputs. To evaluate the model's effectiveness, a benchmark is proposed to calculate the actual error between a predicted feature line and its ground truth. A novel runway dataset, built from images captured in Microsoft Flight Simulator 2020 (FS2020), is also proposed in this paper to train and test the model. The dataset will be released at https://www.kaggle.com/datasets/relufrank/fs2020-runway-dataset. ERFE shows excellent performance on the FS2020 dataset, and it gives satisfactory results even for real runway images not included in our dataset.

