Abstract

During the transition period, the interactions between human-driven vehicles (HVs) and autonomous vehicles (AVs), especially car-following behaviors, need to be analyzed comprehensively to provide feedback to AV controllers, increase the inference ability of AVs, and reflect the social acceptance of AVs. Previous studies have found, through traffic/numerical simulations or field experiments, that HVs behave differently when following AVs than when following HVs. However, these works have critical drawbacks such as simplified driving environments and limited sample sizes. The objective of this study is to realistically model and understand HV-following-AV dynamics and their microscopic interactions. An inverse reinforcement learning model (Inverse soft-Q Learning) is implemented to retrieve HVs' reward functions in HV-following-AV events. A deep reinforcement learning (DRL) approach, soft actor-critic (SAC), is then adopted to estimate the optimal policy for HVs following AVs. HV-following-AV events extracted from the high-resolution (10 Hz) Waymo Open Dataset are used to validate the proposed model. The results show that, compared with other conventional and data-driven car-following models, the proposed model produces significantly more accurate trajectory predictions and yields more insight into HVs' car-following behaviors.
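Both components named in the abstract, Inverse soft-Q Learning for reward recovery and soft actor-critic (SAC) for policy estimation, build on the entropy-regularized ("soft") value function V(s) = α log Σₐ exp(Q(s, a)/α) and its implied Boltzmann policy. The following is a minimal numerical sketch of these two quantities only; the function names, temperature, and Q-values are illustrative and not taken from the paper:

```python
import numpy as np

def soft_value(q_values, alpha=1.0):
    """Entropy-regularized state value: V(s) = alpha * log sum_a exp(Q(s,a)/alpha).

    As alpha -> 0 this approaches max_a Q(s,a), recovering the hard Bellman backup.
    """
    q = np.asarray(q_values, dtype=float)
    # log-sum-exp with a max shift for numerical stability
    m = q.max() / alpha
    return alpha * (m + np.log(np.exp(q / alpha - m).sum()))

def soft_policy(q_values, alpha=1.0):
    """Boltzmann policy pi(a|s) proportional to exp(Q(s,a)/alpha)."""
    q = np.asarray(q_values, dtype=float)
    z = np.exp((q - q.max()) / alpha)
    return z / z.sum()

# Illustrative Q-values for three discrete actions in one state
q = [1.0, 2.0, 3.0]
v = soft_value(q, alpha=0.5)   # slightly above max(q); tends to 3.0 as alpha -> 0
p = soft_policy(q, alpha=0.5)  # concentrates on the highest-Q action
```

In the soft framework the policy and the value share one temperature α, which is why an IRL method built on soft-Q learning pairs naturally with SAC for forward policy optimization.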
