Abstract

The optimal routing of a single vacant taxi is formulated as a Markov Decision Process (MDP) to maximize profit over a full working period in a transportation network. A batch offline reinforcement learning (RL) method is proposed to learn action values and the optimal policy from archived trajectory data. The method is model-free, in that no state transition model is needed. It is more efficient than commonly used online RL methods based on interactions with a simulator, owing to batch processing and the reuse of transition experiences. The batch RL method is evaluated in a large network of Shanghai, China, with GPS trajectories of over 12,000 taxis. Training is conducted with two datasets: a synthetic dataset in which state transitions are generated in a simulator with a postulated system dynamics model (Yu et al., 2019) whose parameters are derived from observed data, and a dataset of real-world state transitions extracted from observed taxi trajectories. The batch RL method is computationally efficient, reducing training time by dozens of times compared with the online Q-learning method. Its performance, measured by average profit per hour and occupancy rate, is assessed in the simulator against a baseline model (a random walk) and an upper bound generated by the exact Dynamic Programming (DP) method based on the same system model as the simulator. The batch RL methods trained on simulated and observed trajectories both outperform the random walk, and the advantage increases with the training sample size. The batch RL method trained on simulated trajectories achieves 95% of the performance upper bound with 30-minute time intervals, suggesting that the model-free method is highly effective.
The batch RL method trained on observed data achieves around 90% of the performance upper bound with 30-minute time intervals, owing to the discrepancy between the training and evaluation environments; its real-world performance is expected to be similarly good, since training and evaluation would then be based on the same environment.
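The core idea described above can be sketched as follows: action values are learned offline by repeatedly sweeping a fixed set of archived (state, action, reward, next state) transitions, with no further simulator interaction. This is a minimal illustrative sketch, not the paper's implementation; all function names, state encodings, and parameter values are assumptions.

```python
import numpy as np

def batch_q_learning(transitions, n_states, n_actions,
                     gamma=0.95, alpha=0.1, n_sweeps=50):
    """Learn a tabular Q-function from archived transitions only.

    Each sweep reuses every logged transition, which is the source of
    the efficiency gain over online methods that must generate fresh
    experience through simulator interaction.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        for s, a, r, s_next in transitions:
            # Standard Q-learning target, computed from logged data.
            target = r + gamma * Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Toy example: two zones, two actions (0 = stay, 1 = relocate).
# Rewards stand in for trip profit; values are illustrative only.
logged = [(0, 1, 1.0, 1), (1, 0, 0.5, 1), (1, 1, 0.0, 0)]
Q = batch_q_learning(logged, n_states=2, n_actions=2)
policy = Q.argmax(axis=1)  # greedy routing policy per zone
```

In practice, the state would encode at least the taxi's current zone and the time of day, and the action set would be the adjacent zones a vacant taxi can cruise to.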
