Abstract

The electric scooter (e-scooter) has become a popular mode of transportation with the proliferation of shared mobility services. As with other shared mobility services, e-scooter sharing suffers from a recurring imbalance between supply and demand. Various strategies, including demand prediction and vehicle relocation, have been studied to resolve this imbalance. However, these strategies are limited by the difficulty of accurately predicting fluctuating demand and by the high cost and labor required for relocation. As a remedy, we propose a deep reinforcement learning algorithm that suggests price incentives and alternative rental locations for users who cannot find e-scooters at their desired boarding locations. A proximal policy optimization algorithm that accounts for temporal dependencies is applied to train a reinforcement learning agent that allocates a given initial budget to price incentives in a cost-efficient manner. We also allow the proposed algorithm to re-use a portion of the operating profit as price incentives, which yields higher efficiency than relying on the initial budget alone. The proposed algorithm reduces unmet demand by as much as 56% by distributing price incentives efficiently. A geographical analysis shows that the algorithm benefits both users and service providers by promoting the use of idle e-scooters through price incentives. Experimental analysis further suggests an optimal budget, i.e., the most efficient initial budget, which can help operators develop efficient e-scooter sharing services.
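
The abstract describes an agent that spends an incentive budget, partly replenished from operating profit, to redirect users toward idle e-scooters. The following is a minimal, hypothetical sketch of such an environment's budget and reward dynamics; the zone count, fare, acceptance model, and profit-reuse ratio are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the paper's implementation): a toy environment for
# budgeted price-incentive allocation across zones. All constants are assumptions.
import numpy as np

class IncentiveEnv:
    def __init__(self, n_zones=20, initial_budget=500.0, base_fare=2.0,
                 profit_reuse_ratio=0.1, seed=0):
        self.n_zones = n_zones
        self.initial_budget = initial_budget
        self.base_fare = base_fare
        self.profit_reuse_ratio = profit_reuse_ratio  # share of profit recycled into the budget
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.budget = self.initial_budget
        self.supply = self.rng.integers(0, 5, size=self.n_zones).astype(float)
        return self._obs()

    def _obs(self):
        # State: per-zone supply, a demand forecast proxy, and the remaining budget.
        demand_proxy = self.rng.poisson(2.0, size=self.n_zones).astype(float)
        return np.concatenate([self.supply, demand_proxy, [self.budget]])

    def step(self, incentives):
        # `incentives`: per-zone discount offered to users asked to walk to a
        # nearby zone with idle e-scooters; total spending is capped by the budget.
        incentives = np.clip(incentives, 0.0, None)
        spend_cap = min(incentives.sum(), self.budget)

        demand = self.rng.poisson(2.0, size=self.n_zones).astype(float)
        served_locally = np.minimum(demand, self.supply)
        unmet = demand - served_locally

        # Assumed acceptance model: larger incentives redirect more unmet users.
        accept_prob = 1.0 - np.exp(-incentives / self.base_fare)
        redirected = unmet * accept_prob

        spent = min((redirected * incentives).sum(), spend_cap)
        revenue = (served_locally + redirected).sum() * self.base_fare
        profit = revenue - spent

        # Re-use a portion of operating profit to top up the incentive budget.
        self.budget += self.profit_reuse_ratio * max(profit, 0.0) - spent
        self.supply = (np.maximum(self.supply - served_locally - redirected, 0.0)
                       + self.rng.integers(0, 3, size=self.n_zones))

        reward = redirected.sum() - 0.1 * spent  # reward met demand, penalize spending
        done = self.budget <= 0.0
        return self._obs(), reward, done, {"unmet": (unmet - redirected).sum()}
```

A recurrent actor-critic policy (e.g., LSTM-based) trained with PPO on this interface would be one way to capture the temporal dependencies mentioned in the abstract; the paper's actual state, action, and reward definitions may differ.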
