Abstract

The growing spatiotemporal imbalance between supply and demand in ride-hailing services has raised increasing concern. A variety of measures have been studied from the perspective of ride-hailing services per se. However, approaching the imbalance problem from the broader perspective of intermodal mobility, which integrates different modes of passenger transportation within a single trip to overcome the limitations of any single mode, remains largely unexplored. This paper investigates the potential of introducing intermodal mobility options to balance ride-hailing services. We first identify the importance of the decision on the availability of intermodal mobility options. We then formulate this availability decision problem as a Markov decision process (MDP). Because of the convoluted system dynamics and large state space, the MDP is intractable to solve exactly, so we cast it as a reinforcement learning (RL) problem and learn the availability policy approximately. To stabilize the learning process, we model the intermodal ride-hailing service as a stochastic queueing network and tailor a family of state-of-the-art RL algorithms to iteratively evaluate and improve the availability policy. Finally, we test this optimization framework in a large-scale intermodal mobility scenario calibrated with real-world trip data. The results show that the learned availability policy significantly dissipates rider queues and improves service rates, yielding more balanced supply and demand.
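Since the abstract only sketches the approach, the following toy example illustrates the core idea: learning a state-dependent availability policy for a queueing system with RL. It uses tabular Q-learning on a hypothetical single-zone rider queue; all dynamics, parameters, and names here are illustrative assumptions, not the paper's actual formulation or algorithms.

```python
# Minimal illustrative sketch (NOT the paper's implementation): tabular
# Q-learning of an availability policy on a toy single-zone rider queue.
# Every constant and transition rule below is an assumed simplification.
import random

MAX_QUEUE = 20               # states: rider queue length 0..MAX_QUEUE (assumed cap)
ACTIONS = (0, 1)             # 0 = ride-hailing only, 1 = intermodal option available
P_ARRIVAL = 0.7              # chance a rider joins the queue each step (assumed)
P_SERVE = {0: 0.5, 1: 0.8}   # service probability per action (assumed)
COST_AVAIL = 0.5             # per-step operating cost of offering the option (assumed)

def step(queue, action, rng):
    """One transition of the toy queue: stochastic arrival, then service.
    The reward penalizes queue length plus the cost of availability."""
    if rng.random() < P_ARRIVAL:
        queue = min(queue + 1, MAX_QUEUE)
    if queue > 0 and rng.random() < P_SERVE[action]:
        queue -= 1
    reward = -float(queue) - COST_AVAIL * action
    return queue, reward

def q_learning(episodes=2000, horizon=200, alpha=0.1, gamma=0.95, eps=0.1):
    """Epsilon-greedy tabular Q-learning; returns the greedy availability
    policy (one 0/1 decision per queue length)."""
    rng = random.Random(0)
    q = [[0.0, 0.0] for _ in range(MAX_QUEUE + 1)]   # Q[state][action]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r = step(s, a, rng)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return [int(q[s][1] > q[s][0]) for s in range(MAX_QUEUE + 1)]

if __name__ == "__main__":
    policy = q_learning()
    print("availability by queue length:", policy)
```

In this toy setting, the learned policy tends to withhold the intermodal option when the queue is empty (where the availability cost buys nothing) and offer it once riders are waiting, loosely mirroring the paper's goal of dissipating rider queues through availability decisions.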
