The development of automated driving vehicles aims to provide safer, more comfortable, and more efficient mobility. However, the decision-making and control of autonomous vehicles remain limited in their ability to mimic human driving. These limitations become particularly evident in complex and unfamiliar driving scenarios, where weak decision-making ability and poor adaptation of vehicle behaviour are prominent issues. This paper proposes a game-theoretic decision-making algorithm for human-like driving in lane-change scenarios. First, an inverse reinforcement learning (IRL) model is used to quantitatively analyse lane-change trajectories from a naturalistic driving dataset and establish a human-like driving cost function. Safety and comfort terms are then combined with this human-like cost to build a comprehensive decision cost function, which is used in a non-cooperative game of lane-changing decisions to solve for the host vehicle's optimal lane-change decision; this decision problem is formulated as a Stackelberg game optimization. To verify the feasibility and effectiveness of the proposed algorithm, a lane-change test scenario is established. First, the human-like decision-making model derived by the maximum-entropy inverse reinforcement learning algorithm is analysed to verify the effectiveness and robustness of the IRL algorithm. Second, the proposed human-like game decision-making algorithm is validated through interactive lane-change experiments with obstacle vehicles of different driving styles. The experimental results show that the proposed human-like decision-making model produces lane-change behaviours consistent with human driving patterns in expressway lane-change scenarios.
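The Stackelberg formulation described above can be illustrated with a minimal sketch: the host vehicle (leader) chooses the lane-change action that minimizes its own cost while anticipating that the obstacle vehicle (follower) will best-respond by minimizing its cost. The discrete action sets and cost values below are purely illustrative assumptions; in the paper, the host's cost function is learned via maximum-entropy IRL from naturalistic driving data and combined with safety and comfort terms.

```python
# Hypothetical discrete action sets; the paper operates on continuous
# trajectories, and these labels are an illustrative simplification.
HOST_ACTIONS = ["keep_lane", "change_lane"]
OBSTACLE_ACTIONS = ["yield", "accelerate"]

# Illustrative (host_cost, obstacle_cost) per joint action. The weights
# on safety/comfort/human-likeness are assumptions, not the paper's values.
COST = {
    ("keep_lane", "yield"): (2.0, 1.0),
    ("keep_lane", "accelerate"): (2.0, 0.5),
    ("change_lane", "yield"): (0.5, 0.6),
    ("change_lane", "accelerate"): (3.0, 0.8),
}

def stackelberg_decision(cost):
    """Leader (host) minimizes its cost assuming the follower
    (obstacle vehicle) best-responds by minimizing its own cost."""
    best_host, best_val = None, float("inf")
    for h in HOST_ACTIONS:
        # Follower's best response to the leader's committed action h
        o = min(OBSTACLE_ACTIONS, key=lambda a: cost[(h, a)][1])
        if cost[(h, o)][0] < best_val:
            best_host, best_val = h, cost[(h, o)][0]
    return best_host

print(stackelberg_decision(COST))  # -> change_lane
```

With these assumed costs, the host changes lanes because the obstacle's best response to a lane change (yielding) leaves the host with a lower cost than staying in lane; a more aggressive obstacle cost table would flip the decision, which is the style-dependent interaction the experiments probe.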