Abstract

With a focus on fatigue driving detection research, a fully automated driver fatigue status detection algorithm using driving images is proposed. In the proposed algorithm, the multitask cascaded convolutional network (MTCNN) architecture is employed for face detection and feature point location, and the region of interest (ROI) is extracted using the feature points. A convolutional neural network, named EM-CNN, is proposed to detect the states of the eyes and mouth from the ROI images. The percentage of eyelid closure over the pupil over time (PERCLOS) and the mouth opening degree (POM) are the two parameters used for fatigue detection. Experimental results demonstrate that the proposed EM-CNN can efficiently detect driver fatigue status from driving images and outperforms other CNN-based methods, i.e., AlexNet, VGG-16, GoogLeNet, and ResNet50, with accuracy and sensitivity of 93.623% and 93.643%, respectively.
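Because PERCLOS and POM are frame-ratio statistics, they can be computed online over a sliding window once per-frame eye and mouth states are available. The following is a minimal sketch, assuming the per-frame binary states come from a classifier such as EM-CNN; the window length and the fatigue thresholds are illustrative placeholders, not values reported in the paper.

```python
# Minimal sketch: PERCLOS and POM over a sliding window of frames.
# Assumes a classifier (e.g., EM-CNN) already supplies per-frame states:
#   eye_is_closed = True if the eyes are classified as closed,
#   mouth_is_open = True if the mouth is classified as open.
# WINDOW and the thresholds below are illustrative, not the paper's values.
from collections import deque

WINDOW = 150                # ~5 s of video at 30 fps (assumption)
PERCLOS_THRESHOLD = 0.25    # fraction of closed-eye frames flagging fatigue (assumption)
POM_THRESHOLD = 0.50        # fraction of open-mouth frames flagging yawning (assumption)

eye_closed = deque(maxlen=WINDOW)
mouth_open = deque(maxlen=WINDOW)

def update(eye_is_closed: bool, mouth_is_open: bool) -> bool:
    """Push one frame's states and report whether the window indicates fatigue."""
    eye_closed.append(int(eye_is_closed))
    mouth_open.append(int(mouth_is_open))
    if len(eye_closed) < WINDOW:
        return False                                # not enough history yet
    perclos = sum(eye_closed) / len(eye_closed)     # PERCLOS: proportion of closed-eye frames
    pom = sum(mouth_open) / len(mouth_open)         # POM: proportion of open-mouth frames
    return perclos > PERCLOS_THRESHOLD or pom > POM_THRESHOLD
```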

Highlights

  • A survey by the American Automobile Association’s Traffic Safety Foundation found that 16–21% of traffic accidents were caused by driver fatigue [1]

  • In machine vision-based fatigue driving detection, blink frequency and yawning are important indicators for judging driver fatigue. Therefore, this paper proposes a convolutional neural network that recognizes the states of the eyes and mouth to determine whether they are open or closed. The EM-convolutional neural network (EM-CNN) can reduce the influence of factors such as changes in lighting, sitting posture, and occlusion by glasses, improving adaptability to complex environments

  • Face detection and feature point location are performed using the multitask cascaded convolutional network (MTCNN), and the states of the eyes and mouth are determined by EM-CNN (a minimal pipeline sketch follows this list)
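The highlights describe a two-stage pipeline: MTCNN locates the face and its five facial landmarks, eye and mouth ROIs are cropped around those landmarks, and EM-CNN classifies each ROI as open or closed. Below is a minimal sketch of the ROI-extraction stage only, assuming the facenet-pytorch implementation of MTCNN (the paper does not prescribe a particular library); the crop size and the em_cnn call at the end are hypothetical placeholders.

```python
# Hedged sketch: MTCNN face/landmark detection and eye/mouth ROI cropping.
# facenet-pytorch is used only for illustration; the EM-CNN classifier itself
# is not implemented here.
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN()

def extract_rois(frame: Image.Image, half: int = 32):
    """Crop eye and mouth patches around MTCNN's 5 landmarks; None if no face."""
    boxes, probs, landmarks = mtcnn.detect(frame, landmarks=True)
    if landmarks is None:
        return None
    # Landmark order: left eye, right eye, nose, left mouth corner, right mouth corner.
    left_eye, right_eye, _nose, mouth_l, mouth_r = landmarks[0]
    mouth_center = (mouth_l + mouth_r) / 2.0

    def crop(center):
        x, y = center
        # `half` is an illustrative ROI half-size, not the paper's value.
        return frame.crop((int(x - half), int(y - half), int(x + half), int(y + half)))

    return crop(left_eye), crop(right_eye), crop(mouth_center)

# The crops would then be passed to the eye/mouth state classifier, e.g.:
#   eye_state = em_cnn(eye_roi); mouth_state = em_cnn(mouth_roi)   # hypothetical
```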


Summary

Introduction

A survey by the American Automobile Association’s Traffic Safety Foundation found that 16–21% of traffic accidents were caused by driver fatigue [1]. (2) Based on the driving state of the vehicle [8, 9]: Ramesh et al. [10] used a sensor to detect the movement of the steering wheel in real time to determine the degree of driver fatigue. (3) Based on the driver’s facial features [14]: an advantage of this method is that facial features are noninvasive visual information unaffected by other external factors, i.e., the driving state of the vehicle, individual driving characteristics, and the road environment. (4) Based on information fusion: Wang Fei et al. combined physiological indicators and the driving state of the vehicle to detect the driver’s fatigue state by collecting the subject’s EEG signal and the corresponding steering wheel manipulation data. The robustness of this approach is affected by the individual’s manipulation habits and the driving environment.

Related Work
Proposed Methodology
State of the Eye and Mouth Recognition
Experimental Results

