Abstract

Taxi cruising route planning has attracted considerable attention, and relevant studies can be broadly categorized into three main streams: recommending one or multiple areas, providing a detailed cruising route, and deriving the optimal routing policy. However, these studies depend on accurate pick-up/drop-off information and seldom consider cruising speed planning. In view of the rapid development of autonomous taxis, this study proposes AdaBoost-Bagging maximum entropy deep inverse reinforcement learning to learn cruising policies from experienced taxi drivers’ trajectories. Moreover, we develop a trajectory-based self-attention bidirectional LSTM model to adjust cruising speeds on different roads. Numerical experiments using real taxi trajectories in Chengdu, China demonstrate the effectiveness of our approach in learning taxi drivers’ policies and improving taxis’ operational efficiency.
