Abstract

Driving stress is the demand for reserved cognitive capacity that arises after a driver perceives changes in vehicle, road, and environmental factors while driving; it has been shown to affect driving behaviour and interfere with driving safety. Traditional stress prediction relies heavily on psychological data and is limited by the low availability of psychological data collection technology, so it cannot be applied in daily life at scale. In recent years, advances in high-precision visual analysis, represented by deep learning, have laid the foundation for automated, large-scale analysis of the visual environment. This study proposes a framework for the quantitative analysis of highway driving stress based on multiple vehicle, road, and environmental factors. A dilated residual network and other methods were used to extract visual environment indexes; combined with multisource data such as traffic volume and road design parameters, the LightGBM method was used to construct a high-accuracy expressway driving-stress prediction model. The MAE, RMSE, and R² values of the proposed model are 0.042, 0.004, and 0.881, respectively, demonstrating its usefulness for scaled and efficient assessment of expressway stress loads. The SHAP method was used to explore the relationships between the influencing factors and driving stress, quantify the mechanisms by which vehicle, road, and environmental factors affect stress load, and derive recommendations for highway design and planning from the perspective of reducing stress load.
This study provides a new way to quantitatively investigate the link between multiple road traffic factors and driving stress, enables efficient, large-scale assessment of expressway driving stress, and offers suggestions for highway design and planning aimed at reducing stress load.
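As context for the reported figures (MAE 0.042, RMSE 0.004, R² 0.881), the three evaluation metrics can be computed as sketched below. The data here are made-up illustrative values, not the paper's; only the metric definitions are standard.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE, and R^2 -- the three metrics the abstract reports."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                       # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))                # root mean squared error
    ss_res = np.sum(err ** 2)                        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    return mae, rmse, r2

# Toy stress scores (illustrative only, not the paper's data).
mae, rmse, r2 = regression_metrics([1.0, 2.0, 3.0, 4.0],
                                   [1.1, 1.9, 3.2, 3.8])
```

Note that RMSE is never smaller than MAE on the same residuals, which is worth keeping in mind when reading reported metric triples.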
