Abstract

Recently, the development of Bike Sharing Systems (BSSs) has brought environmental and economic benefits to the public. However, BSSs, including dock-less ones, frequently suffer from imbalanced bike distribution. The underflow or overflow of bikes in a region may lower the service level of the BSS or cause congestion in the city. In this paper, we consider rebalancing a dock-less BSS by offering users monetary incentives. The long-term objective is to maximize the number of satisfied users who successfully complete their rides over a period of time. The operator of the dock-less BSS can not only encourage a user to rent a bike in the neighborhood of the trip's source with a source incentive, but also incentivize the user to return the bike in the neighborhood of the trip's destination with a destination incentive. To learn differentiated incentive prices for rebalancing bikes across time and space, we extend a novel deep reinforcement learning framework for user incentives. The source and destination incentives are integrated adaptively by adjusting the detour level at the source and/or destination so as to avoid bike underflow and overflow. In the experiments, we evaluate our approach against two existing pricing schemes, with the locations of sources and destinations extracted from a selected Mobike dataset. The experimental results show that our adapted learning algorithm outperforms both the original algorithm, which considers only source incentives, and another state-of-the-art approach in maximizing the long-term number of satisfied users.
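To make the incentive mechanism concrete, the following is a minimal toy simulation of the source/destination detour idea described above; it is not the paper's deep reinforcement learning framework, and the region count, price levels, and user-behavior rules are all hypothetical placeholders introduced here for illustration.

```python
# Toy sketch (not the paper's implementation) of the decision loop in the abstract:
# an incentive price is offered per trip, a user may detour at the source or the
# destination, and the reward signal is the number of satisfied users.
import random

N_REGIONS = 4                    # hypothetical set of regions
SRC_PRICE = 0.5                  # hypothetical source incentive (detour probability proxy)
DST_PRICE = 0.5                  # hypothetical destination incentive

def step(bikes, src_region, dst_region, src_price, dst_price):
    """Simulate one user trip under the offered incentives.

    Returns the updated bike counts and 1 if the user is satisfied
    (found a bike and completed the ride), else 0.
    """
    rent_region = src_region
    # Source incentive: if the source region is empty, the user may detour to rent nearby.
    if bikes[src_region] == 0 and random.random() < src_price:
        rent_region = (src_region + 1) % N_REGIONS
    if bikes[rent_region] == 0:
        return bikes, 0          # unsatisfied: no bike available
    return_region = dst_region
    # Destination incentive: detour the return toward the most underflowing region.
    if random.random() < dst_price:
        return_region = min(range(N_REGIONS), key=lambda r: bikes[r])
    bikes[rent_region] -= 1
    bikes[return_region] += 1
    return bikes, 1              # satisfied user

if __name__ == "__main__":
    bikes = [5, 0, 3, 8]
    satisfied = 0
    for _ in range(100):
        src, dst = random.randrange(N_REGIONS), random.randrange(N_REGIONS)
        # A learned policy would set the prices from the current state; here they are fixed.
        bikes, ok = step(bikes, src, dst, SRC_PRICE, DST_PRICE)
        satisfied += ok
    print("satisfied users:", satisfied)
```

In the paper's setting, the fixed prices above would instead be chosen by the learned policy, differentiated over regions and time, with the cumulative count of satisfied users as the long-term objective.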
