Abstract

In this work we investigate the use of hierarchical collaborative multi-agent reinforcement learning (H-CMARL) methods for computing joint policies that resolve congestion problems in the Air Traffic Management (ATM) domain. In particular, to address cases where the demand for airspace use exceeds capacity, we consider agents representing flights that must jointly decide on ground delays at the pre-tactical stage of operations, so that their trajectories can be executed while adhering to airspace capacity constraints. To do so, the agents collaborate using collaborative multi-agent reinforcement learning methods. Specifically, starting from a multi-agent Markov Decision Process formulation of the problem, we introduce a flat and a hierarchical collaborative multi-agent reinforcement learning method operating at two levels (the ground level and an abstract level). To quantitatively assess the quality of the solutions produced by the proposed approaches, and to demonstrate the potential of the hierarchical method in resolving demand-capacity balance problems, we provide experimental results on real-world evaluation cases, measuring the average delay of flights and the number of delayed flights.
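The demand-capacity balance setting described above can be illustrated with a minimal sketch (not the authors' implementation; function names, the single-sector simplification, the period length, and the overload penalty are all illustrative assumptions): each flight's action is a ground delay, a counting period's demand is the number of flights entering the sector in that period, and a joint cost trades total delay against capacity violations.

```python
from collections import Counter

def sector_demand(entry_times, delays, period_len=60):
    """Count flights entering a (single, hypothetical) sector per counting
    period, after each flight's planned entry time is shifted by its ground
    delay. Times and delays are in minutes."""
    return Counter((t + d) // period_len for t, d in zip(entry_times, delays))

def joint_cost(entry_times, delays, capacity, period_len=60,
               overload_penalty=100.0):
    """Joint cost for one joint action (a vector of ground delays):
    total delay plus a penalty per flight exceeding capacity in any period.
    A collaborative learner would minimize this cost (equivalently,
    maximize its negation as a shared reward)."""
    demand = sector_demand(entry_times, delays, period_len)
    overload = sum(max(0, n - capacity) for n in demand.values())
    return sum(delays) + overload_penalty * overload

# Three flights planned to enter in the same hour, sector capacity 2:
# doing nothing overloads the period, while delaying one flight by an
# hour resolves the imbalance at the cost of that flight's delay.
no_action = joint_cost([0, 10, 20], [0, 0, 0], capacity=2)
delay_one = joint_cost([0, 10, 20], [0, 0, 60], capacity=2)
```

Here `delay_one` is cheaper than `no_action`, which is the kind of joint decision the agents must learn; the hierarchical method additionally reasons over abstracted (coarser) periods.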
