Abstract

The 4D flight trajectory of an aircraft consists of three spatial dimensions plus time as the fourth dimension. Accurate trajectory prediction therefore requires extracting spatial and temporal features simultaneously. With the growth of Trajectory Based Operations at major airports, such prediction is increasingly important in air traffic management. This paper proposes a novel hybrid deep model that combines a Convolutional Neural Network (CNN) with a Gated Recurrent Unit (GRU) for 4D flight trajectory prediction. In general, such a model would be used for long-term flight trajectory planning, for example over a one-month period. The CNN part extracts spatial features and the GRU part extracts temporal features. Historical Automatic Dependent Surveillance-Broadcast (ADS-B) data, openly available from the OpenSky Network, is used as the research dataset for experiments and comparison. The purpose of this paper is to compare the novel CNN-GRU model with separated spatio-temporal inputs against the same model with a unified input. The experimental analysis shows that the proposed approach is more efficient and accurate than the unified network: with separated spatio-temporal inputs, the Mean Absolute Error is reduced by 35.85% and the Root Mean Squared Error by 20.48% compared to the CNN-GRU hybrid model with unified input. The proposed approach thus provides support for deep hybrid models in trajectory prediction.
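The separated-input architecture described above can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration of the general idea (a CNN branch for the spatial channels and a GRU branch for the temporal channel, fused before a prediction head); the layer sizes, kernel widths, and fusion strategy are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class CNNGRUSeparated(nn.Module):
    """Hypothetical sketch of a CNN-GRU hybrid with separated
    spatial and temporal inputs (all sizes are illustrative)."""

    def __init__(self, spatial_dim=3, temporal_dim=1, hidden=32):
        super().__init__()
        # CNN branch: extracts spatial features from (lat, lon, alt) sequences
        self.cnn = nn.Sequential(
            nn.Conv1d(spatial_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # GRU branch: extracts temporal features from the time channel
        self.gru = nn.GRU(temporal_dim, hidden, batch_first=True)
        # Fused head predicts the next 3D position
        self.head = nn.Linear(hidden * 2, 3)

    def forward(self, spatial, temporal):
        # spatial: (batch, seq_len, 3); Conv1d expects (batch, channels, seq_len)
        s = self.cnn(spatial.transpose(1, 2)).mean(dim=2)  # (batch, hidden)
        _, h = self.gru(temporal)                          # h: (1, batch, hidden)
        t = h.squeeze(0)                                   # (batch, hidden)
        return self.head(torch.cat([s, t], dim=1))         # (batch, 3)

model = CNNGRUSeparated()
spatial = torch.randn(4, 10, 3)   # 4 tracks, 10 ADS-B points, (lat, lon, alt)
temporal = torch.randn(4, 10, 1)  # matching timestamps
pred = model(spatial, temporal)   # predicted next 3D position per track
```

The contrast with the unified-input variant is that a unified model would concatenate all four channels into one tensor and feed it through a single branch, rather than letting each branch specialize in one feature type.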
