The development of automated vehicles (AVs) will remain in the stage of human–machine co-driving for a long time. Trust is regarded as an essential foundation of the interaction between the driver and the automated driving system (ADS). Miscalibrated trust, manifested as under-trust or over-trust, is a potential cause of the disuse and misuse of the ADS, and even of serious accidents. Estimating and calibrating trust are therefore crucial to improving driving safety. This paper consists of two main parts. First, a dynamic and quantitative trust estimation model is established. Within the proposed estimation framework, the driver's perceived risk and behavior features are monitored, and a Kalman filter is used to dynamically and quantitatively estimate the driver's trust. A driver-in-the-loop experiment was conducted, and the model parameters were obtained through a data-driven approach. The results demonstrate that the model estimates trust with good precision, with the highest accuracy reaching 74.1%. Second, based on this model, a reminder strategy is proposed to calibrate driver over-trust. A scenario with four risky events was designed, and the ADS provided voice reminders to the driver whenever over-trust was detected. The results demonstrate that the reminder strategy improves safety and helps maintain moderate trust during driving: when the driver was over-trusting, the accident rates of the non-reminder and reminder groups were 60.6% and 13.0%, respectively. The contributions of this paper are fourfold: (1) a real-time trust estimation model is proposed that is dynamic and quantitative, accounting for the evolution of the driver's trust and the perceived risk; (2) mathematical modeling and machine learning methods are combined; (3) a trust-based reminder strategy is designed to enhance the safety of human–machine co-driving; (4) a driver-in-the-loop experiment validates the effectiveness of the approach in enhancing safety, maintaining the driver's trust, and reducing trust biases in human–machine co-driving.
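The abstract describes a Kalman filter that tracks the driver's trust from perceived risk and behavior features. The sketch below is a minimal, hypothetical illustration of that idea rather than the paper's actual model: the scalar state, the behavior-derived trust proxy `z`, the risk-driven input `u`, and all noise parameters are assumptions chosen for illustration only.

```python
# Hypothetical sketch: a scalar Kalman filter that tracks a driver's trust level.
# The state x is the unobserved trust level; the "measurement" z is a proxy
# derived from monitored behavior features; perceived risk enters as an input u.
# All names and values are illustrative assumptions, not the paper's parameters.

class TrustKalmanFilter:
    def __init__(self, x0=0.5, p0=1.0, q=0.01, r=0.1):
        self.x = x0   # initial trust estimate (e.g., on a 0-1 scale)
        self.p = p0   # initial estimate variance
        self.q = q    # process noise: how quickly trust is assumed to drift
        self.r = r    # measurement noise of the behavior-based trust proxy

    def step(self, z, u=0.0):
        """One predict/update cycle.
        z: trust proxy measured from driver behavior features.
        u: control input, e.g., a decrement when perceived risk rises.
        """
        # Predict: trust evolves slowly, nudged by the risk-driven input u
        x_pred = self.x + u
        p_pred = self.p + self.q
        # Update: blend the prediction with the behavior-based measurement
        k = p_pred / (p_pred + self.r)      # Kalman gain
        self.x = x_pred + k * (z - x_pred)
        self.p = (1.0 - k) * p_pred
        return self.x

# Example: trust dips when a risky event is perceived, then recovers
tkf = TrustKalmanFilter()
for z, risk in [(0.6, 0.0), (0.4, 0.3), (0.5, 0.0), (0.7, 0.0)]:
    trust = tkf.step(z, u=-0.1 * risk)
    print(f"estimated trust: {trust:.3f}")
```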