Abstract

The present study proposes a framework for learning the car-following behavior of drivers based on maximum entropy deep inverse reinforcement learning. The proposed framework enables learning the reward function, which is represented by a fully connected neural network, from driving data, including the speed of the driver’s vehicle, the distance to the leading vehicle, and the relative speed. Data from two field tests with 42 drivers are used. After clustering the participants into aggressive and conservative groups, the car-following data were used to train the proposed model, a fully connected neural network model, and a recurrent neural network model. Adopting the fivefold cross-validation method, the proposed model was proved to have the lowest root mean squared percentage error and modified Hausdorff distance among the different models, exhibiting superior ability for reproducing drivers’ car-following behaviors. Moreover, the proposed model captured the characteristics of different driving styles during car-following scenarios. The learned rewards and strategies were consistent with the demonstrations of the two groups. Inverse reinforcement learning can serve as a new tool to explain and model driving behavior, providing references for the development of human-like autonomous driving models.

Highlights

  • Recent studies have suggested that the development of autonomous driving may benefit from imitating human drivers [1,2,3]. There are two reasons: First, the comfort of autonomous vehicles (AVs) may be improved if the driving styles match the preferences of the passengers

  • We propose a car-following model based on Max-Ent deep IRL (DIRL). The proposed model learns the rewards of drivers during car-following, which are approximated by a neural network (NN). The policy of drivers is solved by a reinforcement learning (RL) algorithm, a softmax version of value iteration

  • Tested on actual driving data, the results showed that the proposed model outperformed the behavior cloning (BC) models NN and RNN by providing the lowest root mean square percentage error (RMSPE) and MHD50 in replicating drivers’ car-following trajectories. The better performance of the proposed model can be explained by its more general objective compared with the BC models. The DIRL model reproduces drivers’ policy by first learning drivers’ decision-making mechanisms, whereas the BC approaches only learn the state-action relationships
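The softmax (maximum-entropy) value iteration mentioned in the highlights can be sketched as follows on a discretized MDP. This is a generic illustration of the technique, not the paper's implementation: the state/action discretization, the tabular reward array (which in the paper would be produced by the learned reward network over speed, spacing, and relative speed), and all variable names are assumptions.

```python
import numpy as np

def softmax_value_iteration(reward, transition, gamma=0.95, n_iters=200):
    """Soft (maximum-entropy) value iteration on a discretized MDP.

    reward:     (S, A) array of rewards, e.g. evaluated from a learned reward model
    transition: (S, A, S) array of transition probabilities P(s' | s, a)
    Returns a stochastic policy (S, A) with pi(a|s) = exp(Q(s,a) - V(s)).
    """
    S, A = reward.shape
    V = np.zeros(S)
    for _ in range(n_iters):
        Q = reward + gamma * (transition @ V)                 # soft Q-values, (S, A)
        m = Q.max(axis=1)                                     # shift for stability
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))    # soft backup: log-sum-exp
    return np.exp(Q - V[:, None])                             # softmax policy over actions

# Toy 2-state, 2-action MDP: each action deterministically moves to its own state,
# and each state rewards the action that stays in it.
reward = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
transition = np.zeros((2, 2, 2))
transition[0, 0, 0] = transition[0, 1, 1] = 1.0
transition[1, 0, 0] = transition[1, 1, 1] = 1.0
policy = softmax_value_iteration(reward, transition)
```

Replacing the hard max of standard value iteration with a log-sum-exp backup is what makes the resulting policy stochastic, which is the property MaxEnt IRL exploits to match the distribution over demonstrated trajectories rather than a single deterministic action per state.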



Introduction

Recent studies have suggested that the development of autonomous driving may benefit from imitating human drivers [1,2,3]. There are two reasons: First, the comfort of autonomous vehicles (AVs) may be improved if the driving styles match the preferences of the passengers. The modeling of car-following behavior has been a common research focus in the fields of traffic simulation [4], advanced driver-assistance system (ADAS) design [5], and connected driving and autonomous driving [6,7,8,9]. With the rapid development of data science, data-driven methods with a focus on learning the behavior of drivers based on field data [13, 14] have emerged. Among the existing approaches, data-driven car-following models were found to provide the highest accuracy and best generalization ability for replicating the drivers’ trajectories
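The accuracy metrics used throughout the paper to compare models, RMSPE and the modified Hausdorff distance, can be sketched from their standard definitions. These are the textbook formulas (Dubuisson–Jain for MHD); the paper's exact variants (e.g. the MHD50 reported in the highlights) may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def rmspe(observed, simulated):
    """Root mean square percentage error between an observed and a
    simulated trajectory (e.g. spacing or speed over time)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.sqrt(np.mean(((simulated - observed) / observed) ** 2))

def mhd(A, B):
    """Modified Hausdorff distance between two point sets of shape
    (n, d) and (m, d): the larger of the two directed mean-min distances."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

RMSPE scores pointwise relative error along a trajectory, while MHD compares trajectories as point sets and is therefore less sensitive to small time misalignments, which is why the two are often reported together.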

