Abstract

Commercial airlines use revenue management systems to maximize revenue by making real-time decisions on the booking limits of the different fare classes offered on each of their scheduled flights. Traditional approaches—such as mathematical programming, dynamic programming, and heuristic rule-based decision models—rely heavily on external mathematical models of demand and of passenger arrival, choice, and cancelation behavior, making their performance sensitive to the accuracy of these model estimates. Moreover, many of these approaches scale poorly as problem dimensionality increases. Additionally, they lack the ability to explore and “directly” learn the true market dynamics from interactions with passengers and to adapt to changing market conditions on their own. To overcome these limitations, this research uses deep reinforcement learning (DRL), a model-free decision-making framework, to find the optimal policy for the seat inventory control problem. The DRL framework employs a deep neural network to approximate the expected optimal revenues for all possible state-action combinations, allowing it to handle the large state space of the problem. The problem formulation considers multiple fare classes with stochastic demand, passenger arrivals, and booking cancelations. An air travel market simulator was developed, based on these market dynamics and passenger behaviors, for training and testing the agent. The results demonstrate that the DRL agent is capable of learning the optimal airline revenue management policy through interactions with the market, matching the performance of exact dynamic programming methods. The revenue generated by the agent in different simulated market scenarios was found to be close to the maximum possible flight revenues and to surpass that produced by the expected marginal seat revenue-b (EMSRb) method.
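
To make the described setup concrete, the value network can be pictured as a small feed-forward model that maps the current booking state to an estimated expected revenue-to-go for each seat inventory action. The sketch below is a minimal illustration under assumed details, not the authors' implementation: the state encoding (booking periods remaining, seats remaining), the action set (which fare classes to keep open), and names such as `QNetwork` and `select_action` are hypothetical.

```python
# Minimal sketch of a Q-network for seat inventory control.
# Assumptions (not from the paper): state = [booking periods left, seats
# remaining]; each action opens/closes a subset of fare classes.
import random
import torch
import torch.nn as nn

N_FARE_CLASSES = 4
STATE_DIM = 2                      # (booking periods left, seats remaining)
N_ACTIONS = 2 ** N_FARE_CLASSES    # one action per open/closed class pattern


class QNetwork(nn.Module):
    """Approximates the expected optimal revenue for every state-action pair."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)


def select_action(q_net, state, epsilon=0.1):
    """Epsilon-greedy choice over fare-class availability actions."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)      # explore the market
    with torch.no_grad():
        return int(torch.argmax(q_net(state)))  # exploit learned estimates


if __name__ == "__main__":
    q_net = QNetwork()
    state = torch.tensor([10.0, 120.0])  # 10 periods left, 120 seats unsold
    action = select_action(q_net, state)
    print(f"open/close pattern for fare classes: {action:0{N_FARE_CLASSES}b}")
```

In a full training loop, such a network would be fit with standard temporal-difference targets computed from transitions sampled from the market simulator; the details above are placeholders for whatever architecture and action encoding the study actually used.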
