Abstract

As the number of travellers in airports increases, the load on Baggage Handling Systems naturally grows. To accommodate this, airports can either expand or optimize their Baggage Handling Systems. Capacity is therefore a common parameter for evaluating Baggage Handling Systems, and methods that can increase capacity are highly valued within the airport industry. Previous work has shown that Deep Reinforcement Learning methods can increase system capacity when a high load is applied constantly. It remains unclear, however, how well such Deep Reinforcement Learning agents perform when the load on the system varies according to distributed flight schedules and realistic distributions of incoming baggage. In this work, we apply Deep Reinforcement Learning to a simulated Baggage Handling System with flight schedules and a distribution of incoming baggage generalized from data from a real airport. In contrast to previous work, we allow empty baggage totes to be stored at the entry point until new baggage arrives. The centralized Deep Reinforcement Learning agent must learn to balance the number of baggage totes in the entry queue while also learning optimal routing strategies, ensuring that all bags meet their scheduled departure times. Performance is measured by the average number of delivered bags and the average number of rush bags in the example environment. We find that using Deep Reinforcement Learning in this type of congested system with scheduled departures reduces the number of rush bags compared to a dynamic shortest-path method with deadlock avoidance, resulting in a higher number of delivered bags in the system.
