Abstract

The advent of connected vehicles (CVs) offers new opportunities to address urban parking problems through the widespread application of online parking assignment (OPA) services. However, before CVs fully replace non-connected vehicles (NCVs), the two are expected to coexist for a long time. This poses a challenge for OPA because of the availability constraints imposed by the uncertain arrivals and departures of NCVs. This paper proposes a multi-agent deep reinforcement learning framework that generates efficient OPA strategies from partial observations of parking demand. Specifically, we create two agents: one measures the impact of NCVs, and the other explores the parking characteristics of CVs. A value decomposition method is adopted to solve the multi-agent learning problem, and a modified exploration strategy is designed to guide agent training and avoid unnecessary trials. To verify the performance of the proposed approach, we derive baselines for the total time expenditure in a parking area based on the widely adopted first-come-first-served (FCFS) strategy and a hypothetical system-optimum strategy, respectively. We also present a dynamic assignment model with forecasting as an additional comparison that uses the same demand information. Two typical parking scenarios are selected for comparative experiments with actual operating data. The experimental results show that the proposed learning-based approach allocates parking resources effectively. When CV users' parking information is provided only shortly in advance, our approach achieves up to a 15% improvement in assignment performance over the other baselines.
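The abstract does not specify which value decomposition scheme is used. For intuition only, a minimal VDN-style sketch in PyTorch, assuming two per-agent Q-networks (one for the NCV-impact agent, one for the CV-characteristics agent) whose chosen-action values are summed into a joint value trained against a shared TD target, could look like the following; all network sizes, observation dimensions, and names are hypothetical.

```python
import torch
import torch.nn as nn

class AgentQNet(nn.Module):
    """Per-agent Q-network over that agent's partial observation."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

class VDNMixer(nn.Module):
    """Additive value decomposition: Q_total = sum_i Q_i(o_i, a_i)."""
    def forward(self, agent_qs):
        # agent_qs: (batch, n_agents) chosen-action Q-values
        return agent_qs.sum(dim=1, keepdim=True)

# Hypothetical agents: one observing NCV effects, one observing CV requests.
ncv_agent = AgentQNet(obs_dim=16, n_actions=8)
cv_agent = AgentQNet(obs_dim=24, n_actions=8)
mixer = VDNMixer()

obs_ncv = torch.randn(32, 16)   # batch of NCV-side observations (made up)
obs_cv = torch.randn(32, 24)    # batch of CV-side observations (made up)
q_ncv = ncv_agent(obs_ncv).max(dim=1, keepdim=True).values
q_cv = cv_agent(obs_cv).max(dim=1, keepdim=True).values
q_total = mixer(torch.cat([q_ncv, q_cv], dim=1))  # joint value for the TD target
```

In a full training loop, q_total would be regressed toward a joint reward (for example, the negative total time expenditure) plus the discounted joint value of the next observation, so credit is assigned back to each agent implicitly through the additive mixer.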
