Abstract

With the rapid growth of flight flow, controllers' workload is increasing daily, and handling flight conflicts accounts for the bulk of it. More efficient conflict-resolution decision support for controllers is therefore needed, yet existing methods have limitations that have prevented their wide adoption. In this paper, a Deep Reinforcement Learning (DRL) algorithm is proposed to resolve multi-aircraft flight conflicts with high solving efficiency. First, the characteristics of the multi-aircraft flight conflict problem are analyzed and the problem is modeled as a Markov decision process. The Independent Deep Q Network (IDQN) algorithm is then used to solve the model, and a 'downward-compatible' framework that supports dynamic expansion of the number of conflicting aircraft is designed. After adequate training, the model converges. Finally, test conflict scenarios and evaluation indicators are used to verify its validity. In 700 test scenarios, 85.71% of conflicts were successfully resolved, and 71.51% of aircraft reached their destinations within 150 s of their original arrival times. Compared with existing approaches, the DRL-based conflict resolution algorithm has a clear advantage in solution speed. The proposed method offers decision-making support for controllers and can reduce their workload in future high-density airspace environments.

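To make the IDQN idea concrete, the sketch below shows a generic Independent DQN setup in the style described by the abstract: one Q-learning agent per conflicting aircraft, each selecting its own maneuver, with the agent pool growing as aircraft are added to the conflict. The abstract does not specify the state representation, action set, reward, or network architecture, so all dimensions, action labels, and hyperparameters here are illustrative assumptions rather than the paper's actual design.

```python
# Hypothetical Independent DQN (IDQN) skeleton: one learner per conflicting
# aircraft, each treating the other aircraft as part of the environment.
# State/action sizes and rewards are placeholders, not the paper's design.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8   # e.g. own position/heading/speed plus relative intruder info (assumed)
N_ACTIONS = 5   # e.g. {maintain, turn left, turn right, climb, descend} (assumed)
GAMMA = 0.99


class QNetwork(nn.Module):
    """Small fully connected Q-value approximator."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class IDQNAgent:
    """One independent learner per aircraft (no parameter sharing assumed)."""
    def __init__(self):
        self.q = QNetwork(STATE_DIM, N_ACTIONS)
        self.target_q = QNetwork(STATE_DIM, N_ACTIONS)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=1e-3)
        # Replay buffer of (s, a, r, s2, done) transitions, each stored as tensors.
        self.buffer = deque(maxlen=50_000)

    def act(self, state: torch.Tensor, epsilon: float) -> int:
        """Epsilon-greedy action selection over this agent's own Q-values."""
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(self.q(state).argmax().item())

    def learn(self, batch_size: int = 32) -> None:
        """One-step TD update on a sampled minibatch."""
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, done = map(torch.stack, zip(*batch))
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + GAMMA * (1 - done) * self.target_q(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


# 'Downward-compatible' idea in miniature: the agent pool scales with the number
# of aircraft in the conflict, while each agent's interface stays fixed.
agents = [IDQNAgent() for _ in range(3)]  # e.g. a three-aircraft conflict
dummy_state = torch.zeros(STATE_DIM)
print([agent.act(dummy_state, epsilon=0.1) for agent in agents])
```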