Abstract
Distracted driving has been recognized as a major challenge to traffic safety improvement. This article presents a novel driving distraction detection method based on a new deep network. Unlike traditional methods, the proposed method uses both the temporal and spatial information of electroencephalography (EEG) signals as model inputs. Convolutional techniques and gated recurrent units were adopted to map the relationship between drivers' distraction status and EEG signals in the time domain. A driving simulation experiment was conducted to examine the effectiveness of the proposed method. Twenty-four healthy volunteers participated, and three types of secondary tasks (i.e., a cellphone operation task, a clock task, and a 2-back task) were used to induce distraction during driving. Drivers' EEG responses were measured using a 32-channel electrode cap, and the EEG signals were preprocessed to remove artifacts and then split into short EEG sequences. The proposed deep-network-based distraction detection method was trained and tested on the collected EEG data and, to evaluate its effectiveness, compared with networks using temporal or spatial information alone. The results showed that the proposed method achieved an overall binary (distraction versus nondistraction) classification accuracy of 0.92; for task-specific distraction detection, its accuracy was 0.88. Further analysis showed that detection performance differed across individual drivers, which suggests that adaptive learning for each individual driver would be needed when developing in-vehicle distraction detection applications.

Note to Practitioners
Driver distraction detection is crucial for enhancing safety and avoiding crashes caused by nondriving-related activities, such as calling and texting while driving. Previous studies have mainly focused on detecting distraction by monitoring head and eye movements with computer vision technologies or by extracting indicators from driving performance measures for driver state inference. However, complex traffic environments (e.g., dynamically changing light distribution on the driver's face and nighttime driving with low illumination) strongly limit the effectiveness of computer vision technologies, and driving performance characteristics may also be caused by factors other than distraction (e.g., fatigue). To address these problems, this article develops a deep learning-based approach that maps the relationship between driver distraction and bioelectric electroencephalography (EEG) signals, which are not affected by the traffic environment. The proposed method can be integrated into driver assistance systems and autonomous vehicles to handle emergency situations that require the driver to take over. Timely detection of distraction by the proposed method will significantly facilitate practical applications in collision avoidance and danger mitigation during the handover process.
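To make the modeling idea concrete, the sketch below illustrates a convolutional-plus-gated-recurrent-unit classifier for short multichannel EEG windows. It is a minimal illustration, not the authors' exact architecture: the 32 electrode channels and binary distraction labels come from the abstract, while the PyTorch framework, the 128-sample window length, and all layer sizes are assumptions made for the example.

# Minimal sketch of a CNN + GRU classifier for short EEG sequences.
# Assumptions: 32 EEG channels, 128-sample windows, binary labels;
# layer sizes are illustrative and not taken from the article.
import torch
import torch.nn as nn

class EEGConvGRU(nn.Module):
    def __init__(self, n_channels=32, n_classes=2, hidden_size=64):
        super().__init__()
        # 1-D convolution over time mixes the electrode channels and
        # extracts local temporal features from each EEG window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # GRU models longer-range temporal dependencies across the
        # convolved feature sequence.
        self.gru = nn.GRU(input_size=64, hidden_size=hidden_size,
                          batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, channels, time)
        feats = self.conv(x)               # (batch, 64, time / 2)
        feats = feats.permute(0, 2, 1)     # (batch, time / 2, 64)
        _, h_n = self.gru(feats)           # h_n: (1, batch, hidden)
        return self.fc(h_n.squeeze(0))     # (batch, n_classes)

# Usage on a dummy batch of eight EEG windows (32 channels x 128 samples).
model = EEGConvGRU()
logits = model(torch.randn(8, 32, 128))
print(logits.shape)  # torch.Size([8, 2])

For task-specific distraction detection as described in the abstract, n_classes would be increased to cover nondistraction plus the three secondary-task categories.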