Driver inattention has emerged as a critical concern for road safety, contributing to an alarming surge in accidents and fatalities. This research introduces a novel inattention detection system structured across six levels: perception; facial feature extraction; tracking of the driver's face and secondary tasks using pre-trained deep learning models; inattention detection; risk estimation; and alerting. The system processes images captured from two strategically positioned cameras that simultaneously record the driver's activities while driving and their facial expressions. The second contribution concerns driver facial feature extraction using multi-task cascaded convolutional networks (MTCNN) and its comparison with the histogram of oriented gradients (HOG)-based frontal face detector and the Haar feature-based cascade classifier. The algorithms were compared on runtime efficiency and robustness to varying lighting conditions and head movements. MTCNN achieves high performance, with accuracy ranging from 96.4% to 99.5% on two datasets containing realistic driving scenarios: the DrivFace dataset and the driver drowsiness dataset. The comparative analysis sheds light on the strengths and weaknesses of each algorithm, providing valuable insights for selecting the most suitable face detection algorithm for our system.
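For context, the three-way comparison mentioned above can be reproduced in outline with off-the-shelf detectors. The sketch below (assuming the mtcnn, dlib, and opencv-python packages and a hypothetical sample frame named driver_frame.jpg) times each detector on a single image; it only illustrates the kind of runtime comparison described and is not the paper's implementation.

```python
# Minimal sketch: timing MTCNN, dlib's HOG frontal face detector, and OpenCV's
# Haar cascade on one frame. Package choices and the image path are assumptions.
import time

import cv2
import dlib
from mtcnn import MTCNN

frame_bgr = cv2.imread("driver_frame.jpg")           # hypothetical sample frame
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

# MTCNN: cascaded CNN detector returning bounding boxes and facial keypoints.
mtcnn_detector = MTCNN()
t0 = time.perf_counter()
mtcnn_faces = mtcnn_detector.detect_faces(frame_rgb)
print(f"MTCNN: {len(mtcnn_faces)} face(s) in {time.perf_counter() - t0:.3f}s")

# dlib's HOG-based frontal face detector.
hog_detector = dlib.get_frontal_face_detector()
t0 = time.perf_counter()
hog_faces = hog_detector(frame_gray, 1)               # 1 = upsample image once
print(f"HOG: {len(hog_faces)} face(s) in {time.perf_counter() - t0:.3f}s")

# OpenCV's Haar feature-based cascade classifier.
haar_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
t0 = time.perf_counter()
haar_faces = haar_detector.detectMultiScale(frame_gray, scaleFactor=1.1,
                                            minNeighbors=5)
print(f"Haar: {len(haar_faces)} face(s) in {time.perf_counter() - t0:.3f}s")
```

Running such a loop over frames drawn from varied lighting conditions and head poses would approximate the robustness comparison the abstract refers to.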