The prevalence of on-street parking search in urban downtown areas leads to significant externalities such as congestion, pollution, and collisions. Understanding parking search behavior is therefore crucial for developing effective management strategies to mitigate these issues. Parking search is an inherently complex, sequential decision-making process, shaped by diverse driver preferences and dynamic urban environments. This study introduces a deep inverse reinforcement learning (DIRL) approach to model drivers' parking search behavior. First, we constructed a high-fidelity parking simulation platform in Unity3D that replicates an urban road network, enabling the collection of 987 valid trajectories. We modeled the parking search process as a Markov decision process (MDP) with carefully designed state-action pairs. A maximum entropy-based DIRL model was then developed to learn drivers' reward functions and parking search policies. Experimental results show that the maximum entropy DIRL model significantly outperforms the traditional maximum entropy inverse reinforcement learning model, achieving a 19.0% improvement in capturing final parking states and a 13.5% improvement in characterizing overall trajectory distributions. Finally, we integrated the trained models into conventional traffic simulation systems to observe how traffic states evolve under different parking search behaviors, providing valuable insights for optimizing urban traffic management strategies.
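To make the core idea concrete, the following is a minimal sketch of maximum entropy inverse reinforcement learning on a toy MDP. It is not the paper's deep model or its Unity3D environment: the chain MDP, the synthetic "expert" trajectories, and the tabular (rather than neural-network) reward are all illustrative assumptions. The sketch shows the essential loop the abstract refers to: fit a reward so that the soft-optimal policy's state-visitation frequencies match those of the expert demonstrations.

```python
import numpy as np

# Toy deterministic chain MDP: states 0..4, actions {0: left, 1: right}.
# Hypothetical "expert" drivers always head right toward state 4,
# standing in for the parking spot at the end of a search trajectory.
n_states, n_actions, horizon = 5, 2, 8
P = np.zeros((n_states, n_actions), dtype=int)  # P[s, a] = next state
for s in range(n_states):
    P[s, 0] = max(s - 1, 0)               # move left (clamped)
    P[s, 1] = min(s + 1, n_states - 1)    # move right (clamped)

# Synthetic demonstrations: reach state 4, then stay there.
expert_trajs = [[0, 1, 2, 3, 4, 4, 4, 4] for _ in range(20)]

def soft_policy(r):
    """Soft (maximum-entropy) value iteration; returns pi[s, a]."""
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = r[:, None] + V[P]             # Q[s, a] = r(s) + V(s')
        V = np.log(np.exp(Q).sum(axis=1)) # soft-max over actions
    return np.exp(Q - V[:, None])         # pi(a|s) = exp(Q - V)

def expected_svf(pi):
    """Expected state-visitation frequencies under pi, starting at 0."""
    d = np.zeros((horizon, n_states))
    d[0, 0] = 1.0
    for t in range(1, horizon):
        for s in range(n_states):
            for a in range(n_actions):
                d[t, P[s, a]] += d[t - 1, s] * pi[s, a]
    return d.sum(axis=0)

# Empirical state-visitation frequencies from the demonstrations.
expert_svf = np.zeros(n_states)
for traj in expert_trajs:
    for s in traj:
        expert_svf[s] += 1
expert_svf /= len(expert_trajs)

# Max-entropy IRL gradient ascent on a tabular reward:
# grad log-likelihood = expert_svf - expected_svf under current reward.
r = np.zeros(n_states)
for _ in range(200):
    r += 0.1 * (expert_svf - expected_svf(soft_policy(r)))

print(np.argmax(r))  # learned reward peaks at the "parking" state, 4
```

In the deep variant the abstract describes, the tabular vector `r` is replaced by a neural network over state features, and the same visitation-matching gradient is backpropagated through the network; the toy version above keeps everything tabular so the gradient is visible directly.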