Abstract

Fatigue negatively affects the safety and performance of drivers on the road. In fact, drowsiness and fatigue are the cause of a substantial number of motor vehicle accidents. Drowsiness among drivers can be detected using a variety of modalities, including electroencephalogram (EEG), eye movement, and vehicle driving dynamics. Among these, EEG is highly accurate but intrusive and cumbersome. Vehicle driving dynamics, on the other hand, are easy to acquire but their accuracy is low. Eye-movement-based approaches offer an attractive balance between these two extremes. However, they normally require an eye-tracking device consisting of a high-speed camera and sophisticated algorithms to extract eye-movement parameters such as blinks, eye closure, saccades, and fixations. This makes eye-tracking-based drowsiness detection difficult to implement as a practical system, especially on an embedded platform. In this paper, the authors propose to use eye images from a camera directly, without the need for an expensive eye-tracking system. Eye-related movements are modeled by a recurrent neural network (RNN) to detect drowsiness. Long short-term memory (LSTM) is a class of RNN with several advantages over vanilla RNNs, and in this work an array of LSTM cells is used to model the eye movements. Two types of LSTMs were employed: a 1-D LSTM (R-LSTM), used as a baseline, and a convolutional LSTM (C-LSTM), which allows 2-D images to be used directly. Patches of size 48 × 48 around each eye were extracted from 38 subjects participating in a simulated driving experiment. The state of vigilance of the subjects was independently assessed by power spectral analysis of simultaneously recorded multichannel EEG signals, from which binary labels of alert and drowsy (baseline) were generated. Results show the high efficacy of the proposed system: the R-LSTM approach achieved an accuracy of around 82%, while the C-LSTM approach achieved accuracies in the range of 95%–97%. A comparison with a recently published eye-tracking-based approach is also provided, showing that the proposed LSTM technique outperforms it by a wide margin.
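As a rough illustration of the C-LSTM idea described in the abstract, the following sketch (not the authors' implementation) builds a small convolutional-LSTM binary classifier over sequences of 48 × 48 eye patches using TensorFlow/Keras. The sequence length, filter count, and training configuration are illustrative assumptions, not values reported in the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 16   # assumed number of consecutive eye-patch frames per sample
PATCH = 48     # patch size stated in the abstract

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, PATCH, PATCH, 1)),   # grayscale eye patches over time
    layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=False),        # C-LSTM: spatial convolutions inside the recurrence
    layers.BatchNormalization(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),            # alert (0) vs. drowsy (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

Under the same assumptions, the R-LSTM baseline would replace the ConvLSTM2D layer with a standard 1-D LSTM operating on vectorized (flattened) patch features rather than on the 2-D images directly.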
