Abstract

Dynamic pricing is essential for airline revenue management, requiring quick adaptation to fluctuating market environments and complex customer behaviors. This study addresses the Multi-Flight Dynamic Pricing (MFDP) problem, which presents unique challenges due to interdependent demand across multiple flights and high dimensionality. Traditional studies often assume that the demand function modeling customer behavior is either known in advance or follows a predefined structure, an assumption that fails to capture the dynamic nature of pricing decisions. To fill this gap, we develop deep reinforcement learning (DRL) algorithms: Deep Q-Network (DQN), Advantage Actor-Critic (A2C), Proximal Policy Optimization (PPO), and Trust Region Policy Optimization (TRPO). By formulating the MFDP problem as a Markov Decision Process (MDP), we design an innovative utility function for the Multinomial Logit (MNL) model that captures realistic features of the airline market, such as competition from high-speed rail, the effect of reference fares, and travel time. We compare the performance of our DRL algorithms with traditional algorithms, including Dynamic Programming (DP), Price Pooling (PP), Inventory Pooling (IP), and Inventory and Price Pooling (IPP). Our experiments demonstrate that DRL algorithms alleviate the curse of dimensionality faced by traditional algorithms, expedite the learning process, and deliver satisfactory performance without relying on predefined demand functions. Among these algorithms, TRPO shows superior performance, achieving 99% of the theoretical optimal revenue and demonstrating its adaptability and stability in dynamic pricing applications. We also highlight the importance of considering the null price in the action space of MFDP problems. The larger the market scale, the more pronounced the effect of the null price in accelerating DRL algorithm convergence, leading to more efficient computational resource utilization.
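To make the demand side of this formulation concrete, the sketch below illustrates an MNL choice model over multiple parallel flights with a no-purchase outside option and a high-speed rail alternative. The linear utility terms (price sensitivity, reference-fare anchoring, travel time) and all coefficient values are hypothetical placeholders for illustration, not the calibrated utility function proposed in the paper.

```python
import numpy as np

def mnl_choice_probabilities(prices, ref_fares, travel_times, hsr_utility=-0.5,
                             beta_price=-0.01, beta_ref=0.005, beta_time=-0.3):
    """Illustrative MNL demand over multiple flights with outside options.

    Coefficients and the linear utility form are assumptions for this sketch;
    the paper's utility function additionally models market-specific effects.
    """
    prices = np.asarray(prices, dtype=float)
    ref_fares = np.asarray(ref_fares, dtype=float)
    travel_times = np.asarray(travel_times, dtype=float)

    # Hypothetical per-flight utility: price sensitivity, reference-fare
    # anchoring (a fare above the reference fare lowers utility), travel time.
    utilities = (beta_price * prices
                 + beta_ref * (ref_fares - prices)
                 + beta_time * travel_times)

    # Outside options: no purchase (utility normalized to 0) and high-speed rail.
    outside = np.array([0.0, hsr_utility])

    expo = np.exp(np.concatenate([utilities, outside]))
    probs = expo / expo.sum()
    return probs[:len(prices)], probs[len(prices):]

# Example: two parallel flights. A "null price" action corresponds to posting a
# fare so high that the flight's purchase probability is effectively zero,
# i.e., temporarily closing sales on that flight.
flight_probs, outside_probs = mnl_choice_probabilities(
    prices=[320.0, 280.0], ref_fares=[300.0, 300.0],
    travel_times=[2.0, 2.5])
print(flight_probs, outside_probs)
```

In an MDP formulation of this kind, such choice probabilities would drive the stochastic transitions of remaining seat inventories, with the posted price vector (including the null price) as the action and realized revenue as the reward.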
