Abstract

The challenging task of "intelligent vehicles" opens up a new frontier for enhancing traffic safety. However, determining driving behavior in a timely and effective manner remains one of the most crucial concerns, as it directly affects the vehicle's collision avoidance capability and dynamic stability, particularly in emergency scenarios. This paper presents a novel model-based reinforcement learning (RL) solution for driving behavior decision-making of autonomous vehicles in emergency situations. First, to generate initial training data, a rule-based expert system (ES) is designed by analyzing human drivers' emergency collision avoidance maneuvers and tire dynamics characteristics. Second, an imitation learning (IL) algorithm is developed to clone the ES's driving behavior using a softmax classifier trained with the mini-batch stochastic gradient descent (MSGD) method. Third, a model-prediction-based Q(λ)-learning algorithm with function approximation is presented to determine the driving policy online, integrating the vehicle system model with the neural network model obtained from IL. Finally, both simulation and experimental results show that the proposed approach can effectively coordinate multiple motion control systems, improving collision avoidance capability and vehicle dynamic stability at or near the driving limits.
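To illustrate the behavior-cloning step, the following is a minimal sketch of training a softmax classifier with mini-batch stochastic gradient descent on expert state–action pairs. All names, dimensions, and the toy dataset are illustrative assumptions, not details from the paper; the actual IL model uses a neural network and expert-system data.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "expert" dataset: vehicle states mapped to discrete expert actions
# (e.g., 0 = brake, 1 = steer-and-brake, 2 = steer). Purely synthetic.
n, d, k = 600, 4, 3                       # samples, state dimension, action classes
X = rng.normal(size=(n, d))               # states (e.g., speed, yaw rate, ...)
y = (X @ rng.normal(size=d) > 0).astype(int) + (X[:, 0] > 1.0)  # labels in {0, 1, 2}

W = np.zeros((d, k))                      # linear softmax parameters
b = np.zeros(k)
lr, batch, epochs = 0.1, 32, 50

for _ in range(epochs):
    idx = rng.permutation(n)              # reshuffle each epoch
    for s in range(0, n, batch):
        j = idx[s:s + batch]
        p = softmax(X[j] @ W + b)         # predicted action probabilities
        p[np.arange(len(j)), y[j]] -= 1.0  # gradient of cross-entropy w.r.t. logits
        W -= lr * X[j].T @ p / len(j)     # mini-batch SGD update
        b -= lr * p.mean(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Replacing the linear softmax layer with a multilayer network, as the paper's IL model does, changes only the forward pass and gradient computation; the mini-batch SGD loop is the same.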
