Abstract

Online ride-hailing platforms have significantly reduced taxi idle time and passenger waiting time. As a key component of these platforms, the fleet management problem can be naturally modeled as a Markov Decision Process, which enables the use of deep reinforcement learning. However, existing studies are based on simplified problem settings that fail to model the complicated supply-demand dynamics, which restricts their performance in real traffic environments. In this article, we propose a supply-demand-aware deep reinforcement learning algorithm for taxi dispatching, in which a deep Q-network with an action sampling policy, called AS-DQN, learns an optimal dispatching policy. Furthermore, we utilize a dueling network architecture, called AS-DDQN, to improve the performance of AS-DQN. Extensive experiments on real-world datasets offer insight into the performance of our model and show that it outperforms the baseline approaches.
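As an illustration only (the abstract does not give the paper's exact formulation), the dueling idea behind architectures such as AS-DDQN can be sketched as follows: the network splits into a state-value stream V(s) and an advantage stream A(s, a), which are recombined into Q-values with the mean advantage subtracted. The epsilon-greedy sampler below is a generic stand-in for an action sampling policy; all names and shapes are hypothetical, not taken from the paper.

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Combine a scalar state value V(s) and per-action advantages A(s, a)
    into Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def sample_action(q_values, epsilon, rng):
    """Epsilon-greedy sampling: a random action with probability epsilon,
    otherwise the greedy (max-Q) action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
# Example: V(s) = 1.0, three candidate dispatch actions with advantages.
q = dueling_q_values(value=1.0, advantages=[0.5, -0.5, 0.0])
print(q)            # -> [1.5 0.5 1. ]
print(sample_action(q, epsilon=0.0, rng=rng))  # greedy choice -> 0
```

Subtracting the mean advantage makes the V/A decomposition identifiable, a standard device in dueling Q-networks.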
