Abstract

The growing spatiotemporal imbalance between supply and demand in ride-hailing services has raised increasing concern. A variety of measures have been studied from the perspective of ride-hailing services themselves. However, approaching the imbalance problem from the broader perspective of intermodal mobility, which integrates different modes of passenger transportation in a single trip and aims to overcome the limitations of any unimodal mobility, remains largely unexplored. This paper investigates the potential of introducing intermodal mobility options to balance ride-hailing services. We first identify the importance of the availability decision, i.e., whether and when intermodal mobility options are offered to riders. We then formulate the availability decision problem as a Markov decision process (MDP). Because of its convoluted system dynamics and large state space, the MDP is intractable to solve exactly, so we cast it as a reinforcement learning (RL) problem and approximately learn the availability policy. To stabilize the learning process, we model the intermodal ride-hailing services as a stochastic queueing network and tailor a family of state-of-the-art RL algorithms to iteratively evaluate and improve the availability policy. Finally, we test this optimization framework in a large-scale intermodal mobility scenario calibrated with real-world trip data. Results show that the learned availability policy significantly dissipates riders' queues and improves service rates toward more balanced supply and demand.
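The abstract does not specify the paper's actual model or algorithms, so the following is a minimal illustrative sketch in Python of the kind of formulation it describes: a toy single-queue stochastic environment standing in for the queueing network, a binary availability action, and a plain policy-gradient (REINFORCE) loop standing in for the tailored RL algorithms. All names and parameters here (ToyIntermodalQueue, the arrival/service rates, the reward weights) are assumptions made for illustration, not the paper's model.

```python
# Hypothetical sketch of the availability-decision MDP from the abstract.
# State: queue length of waiting riders. Action: make the intermodal
# option available (1) or not (0). Reward: penalize waiting riders,
# with a small fixed cost for offering the option.
import numpy as np

rng = np.random.default_rng(0)

class ToyIntermodalQueue:
    """Toy stochastic queue: riders arrive at a fixed rate; making the
    intermodal option available boosts the effective service rate."""
    def __init__(self, arrival_rate=0.7, base_service=0.5,
                 intermodal_boost=0.3, max_queue=20):
        self.lam, self.mu = arrival_rate, base_service
        self.boost, self.max_queue = intermodal_boost, max_queue
        self.q = 0

    def reset(self):
        self.q = 0
        return self.q

    def step(self, make_available):
        # Bernoulli arrivals/departures approximate the queueing dynamics.
        arrivals = rng.random() < self.lam
        mu = self.mu + (self.boost if make_available else 0.0)
        departures = (self.q > 0) and (rng.random() < mu)
        self.q = min(self.max_queue, self.q + int(arrivals) - int(departures))
        reward = -self.q - 0.2 * make_available
        return self.q, reward

def features(q, max_queue):
    # Simple state features: normalized queue length plus a bias term.
    return np.array([q / max_queue, 1.0])

def policy_probs(theta, x):
    # Logistic policy over the binary availability action.
    p = 1.0 / (1.0 + np.exp(-theta @ x))
    return np.array([1.0 - p, p])

env = ToyIntermodalQueue()
theta = np.zeros(2)
for episode in range(2000):
    q = env.reset()
    traj = []
    for t in range(50):
        x = features(q, env.max_queue)
        a = rng.choice(2, p=policy_probs(theta, x))
        q, r = env.step(a)
        traj.append((x, a, r))
    # Monte Carlo returns and a REINFORCE policy-gradient step.
    G, grad = 0.0, np.zeros_like(theta)
    for x, a, r in reversed(traj):
        G = r + 0.99 * G
        p = policy_probs(theta, x)[1]
        grad += (a - p) * x * G   # grad log pi for a logistic policy
    theta += 1e-3 * grad / len(traj)
```

In this toy setting, the gradient pushes the policy toward offering the intermodal option when the queue is long, since the service-rate boost then outweighs the fixed offering cost; this mirrors, in miniature, the queue-dissipation effect the abstract reports for the learned availability policy.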
