Abstract
Advances in science and technology have made modern cars highly technical: more activity takes place inside the car and driving is faster. Nevertheless, statistics show that the number of road fatalities has increased in recent years because of drivers’ unsafe behavior. Keeping the driver alert and awake is therefore important for a safe traffic environment, in both human-driven and autonomous cars. A driver’s cognitive load is considered a good indicator of alertness, but cognitive load is difficult to determine, and wired sensor solutions are not well accepted in real-world driving. Recent progress in non-contact image-processing approaches, together with falling hardware prices, enables new solutions, and several features related to the driver’s eyes are currently being explored in research. This paper presents a vision-based method for extracting useful parameters from a driver’s eye-movement signals, combining manual feature extraction based on domain knowledge with automatic feature extraction using deep learning architectures. Five machine learning models and three deep learning architectures are developed to classify a driver’s cognitive load. The highest classification accuracy achieved is 92% by a support vector machine with a linear kernel and 91% by a convolutional neural network. This non-contact technology is a potential contributor to advanced driver assistance systems.
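The classification step described above can be illustrated with a minimal sketch, which is not the authors' code: it assumes Python with scikit-learn, and the eye-movement features (fixation duration, saccade amplitude, blink rate, pupil diameter) and the randomly generated data are placeholders standing in for the paper's extracted features.

```python
# Minimal sketch (not the authors' code): classifying cognitive load from
# eye-movement features with a linear-kernel SVM, as described in the abstract.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-window eye-movement features (e.g., fixation duration,
# saccade amplitude, blink rate, pupil diameter): one row per analysis window.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 0 = low load, 1 = high load (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Standardize the features, then fit a support vector machine with a linear kernel.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```

With real labeled eye-movement features in place of the synthetic arrays, the same pipeline would yield the kind of accuracy figures the abstract reports.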
Highlights
Today’s vehicle systems are more advanced, faster and safer than before and are in the process of becoming fully autonomous
The aim of these experiments is to observe the performance of the camera system compared to the commercial eye-tracking (eyeT) system in terms of raw-signal comparison, extracted-feature comparison and drivers’ cognitive load classification
The experimental work in this study is four-fold: (1) comparison between the raw signals extracted by the camera system and by the commercial eyeT system, (2) selection of the optimal sampling frequency, i.e., identification of the sampling frequency best suited for feature extraction and classification, (3) comparison between the features extracted from the camera system and from the eyeT system, and (4) cognitive load classification and comparison between the camera system and the eyeT system (steps (1) and (2) are illustrated in the sketch below)
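The following sketch illustrates how steps (1) and (2) could be carried out; it is an assumption-laden example rather than the authors' pipeline. It assumes Python with NumPy and SciPy, synthetic pupil-diameter-like signals in place of the real recordings, and Pearson correlation as the signal-similarity measure; the function name `compare_at_frequency` and the candidate frequencies are hypothetical.

```python
# Illustrative sketch only (not the authors' pipeline): resample a camera-based
# eye signal and a reference eye-tracker signal to a candidate sampling
# frequency, then compare them with Pearson's correlation coefficient.
import numpy as np
from scipy.signal import resample
from scipy.stats import pearsonr

def compare_at_frequency(sig_cam, sig_ref, fs_target, duration_s):
    """Resample both signals to fs_target and return their Pearson correlation."""
    n_target = int(fs_target * duration_s)
    cam_rs = resample(sig_cam, n_target)
    ref_rs = resample(sig_ref, n_target)
    return pearsonr(cam_rs, ref_rs)[0]

# Synthetic example: a 10 s pupil-diameter-like signal recorded at two rates.
duration_s = 10
t_ref = np.linspace(0, duration_s, 60 * duration_s, endpoint=False)  # 60 Hz eye tracker
t_cam = np.linspace(0, duration_s, 30 * duration_s, endpoint=False)  # 30 Hz camera
truth = lambda t: 3.5 + 0.2 * np.sin(2 * np.pi * 0.3 * t)
sig_ref = truth(t_ref) + 0.01 * np.random.default_rng(1).normal(size=t_ref.size)
sig_cam = truth(t_cam) + 0.02 * np.random.default_rng(2).normal(size=t_cam.size)

# Try several candidate sampling frequencies, as in step (2) of the workflow.
for fs in (10, 15, 30):
    r = compare_at_frequency(sig_cam, sig_ref, fs, duration_s)
    print(f"{fs} Hz -> correlation {r:.3f}")
```

In such a setup, the candidate frequency giving both a high camera-vs-reference agreement and adequate resolution for the eye-movement features would be carried forward to steps (3) and (4).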
Summary
Today’s vehicle systems are more advanced, faster and safer than before and are in the process of becoming fully autonomous. The literature shows that most traffic accidents are caused by human error [1]. According to the National Highway Traffic Safety Administration (NHTSA), about 94% of the accidents observed in 2018 involved human error [4], such as higher stress [5], tiredness [6], drowsiness [7,8] or higher cognitive load [9]. A report published in 2015 shows that almost 38% of all road accidents are due to the driver’s mental distraction [10], which increases the driver’s cognitive load. Another driver state, fatigue, is the gradually increasing subjective feeling of tiredness of a subject under load. Fatigue can have physical or mental causes and can manifest itself in a number of different ways [11].