Abstract
In conditionally automated driving, the engagement of non-driving activities (NDAs) can be regarded as the main factor that affects the driver's take-over performance, and investigating it is of great importance to the design of an intelligent human–machine interface for a safe and smooth control transition. This paper introduces a 3D convolutional neural network-based system that recognises six types of driver behaviour (four types of NDAs and two types of driving activities) through two video feeds capturing head and hand movement. Based on the interaction between driver and object, the selected NDAs are divided into an active mode and a passive mode. The proposed recognition system achieves 85.87% accuracy for the classification of the six activities. The impact of NDAs on the driver's situation awareness and take-over quality is further investigated in terms of both activity type and interaction mode. The results show that, at a similar level of maximum lateral error, the engagement of NDAs demands more time for drivers to accomplish the control transition, especially for active-mode NDAs, which are more mentally demanding and reduce drivers' sensitivity to changes in the driving situation. Moreover, the haptic feedback torque from the steering wheel could help to reduce the duration of the transition process and can therefore be regarded as a productive assistance mechanism for take-over.
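The abstract does not specify the network's layer configuration, but the core idea of a 3D CNN is that convolution kernels slide over time as well as space, so a single filter can respond to motion across video frames (e.g. a head turn or a hand reaching for an object). The sketch below is a minimal, illustrative NumPy implementation of one 3D convolution with a hypothetical temporal-difference kernel; it is not the paper's architecture.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Slide a 3D kernel over a video clip (frames x height x width).
    'Valid' padding, stride 1, single channel -- illustrative only."""
    t, h, w = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

# Toy clip: an "object" appears in the frame from frame 2 onwards.
clip = np.zeros((4, 5, 5))
clip[2:, 1:3, 1:3] = 1.0

# Hypothetical temporal-difference kernel: later frame minus earlier frame,
# so the filter responds only where pixel intensity changes over time.
kernel = np.zeros((2, 3, 3))
kernel[0] = -1.0 / 9
kernel[1] = 1.0 / 9

response = conv3d_valid(clip, kernel)
print(response.shape)  # (3, 3, 3): one map per consecutive frame pair
```

In a full classifier, stacks of such spatiotemporal filters (learned, multi-channel, followed by pooling and a softmax over the six activity classes) would replace this single hand-set kernel; here the response is zero for the static frame pairs and non-zero only across the pair where the object appears.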
Highlights
Conditional automation systems, defined by the SAE (J3016) Automation Levels [1], release the driver's eyes and hands from monitoring the environment and controlling the vehicle.
Since the driver's situation awareness could be reduced and their mental demand increased by engagement in non-driving activities (NDAs) [4,5,6], automatically recognising the driver's NDA engagement and further understanding its impact on take-over performance is of great importance to the design of an intelligent human–machine interface (HMI) for a safe and smooth take-over process.
For the NDA detection based on the driver’s head movement (Figure 8a), the precision and recall of both classes are over 95%
Summary
Conditional automation systems, defined by the SAE (J3016) Automation Levels [1], release the driver's eyes and hands from monitoring the environment and controlling the vehicle. With such systems, drivers can perform some non-driving activities (NDAs) during automated driving, but they would have to intervene in the control of the vehicle when requested. Two Tesla fatalities occurred in Williston, Florida, USA, in 2016 and in Mountain View, California, USA, in 2018. In both cases, the Autopilot system was engaged, and the drivers were performing NDAs (watching movies and playing games) before and when the accident happened. Neither the Autopilot nor the driver noticed the hazard ahead and took action to avoid the accident, even though there was sufficient time and distance to react to prevent the crash [2,3]. Both fatalities could have been avoided if there had been a driver monitoring and alert system to prevent prolonged disengagement from the dynamic driving task. Since the driver's situation awareness could be reduced and their mental demand increased by NDA engagement [4,5,6], automatically recognising the driver's NDA engagement and further understanding its impact on take-over performance is of great importance to the design of an intelligent human–machine interface (HMI) for a safe and smooth take-over process.