Abstract
Driving fatigue is a leading cause of annual traffic accidents, so research on driving fatigue detection and early-warning systems is of great practical significance. However, current driving fatigue detection methods still face two problems: a single source of information cannot accurately reflect the driver's actual state across different fatigue phases, and detection performance degrades or even fails under abnormal illumination. In this paper, multi-task cascaded convolutional networks (MTCNN) and infrared-based remote photoplethysmography (rPPG) are used to extract the driver's facial and physiological information, the fatigue cues specific to each modality are mined in depth, and a multi-modal feature fusion model is constructed to comprehensively analyze the trend of the driver's fatigue. To address low detection accuracy under abnormal illumination, the multi-modal features extracted from visible-light and infrared images are fused by a multi-loss reconstruction (MLR) module, and a driving fatigue detection module based on a Bi-LSTM model is built to exploit the temporal dynamics of fatigue. Experiments were conducted under all-weather illumination scenarios on the NTHU-DDD, UTA-RLDDD, and FAHD datasets. The results show that the multi-modal driving fatigue detection model outperforms the single-modal model, improving accuracy by 8.1%. Under abnormal illumination such as strong and weak light, the method reaches an accuracy of 91.7% at best and 83.6% at worst, while under normal illumination it reaches 93.2%.
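The abstract outlines a pipeline of per-frame multi-modal feature fusion followed by Bi-LSTM temporal classification. The sketch below illustrates one plausible way such a module could be wired up; it is not the paper's implementation, and the fusion layer, dimensions, and class count are illustrative assumptions (the actual MLR module, MTCNN features, and rPPG features are defined in the paper itself).

```python
# Illustrative sketch (PyTorch) of a Bi-LSTM fatigue classifier over fused
# visible-light and infrared feature sequences. Layer sizes and the simple
# linear fusion are assumptions, not the paper's published configuration.
import torch
import torch.nn as nn


class BiLSTMFatigueClassifier(nn.Module):
    def __init__(self, vis_dim=128, ir_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        # Stand-in for the paper's MLR fusion: project concatenated
        # visible-light and infrared features into a shared space.
        self.fuse = nn.Linear(vis_dim + ir_dim, hidden_dim)
        # A bidirectional LSTM models the temporal evolution of fatigue cues.
        self.bilstm = nn.LSTM(hidden_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, vis_feats, ir_feats):
        # vis_feats, ir_feats: (batch, time, feature_dim) per-frame features
        # extracted upstream (e.g. from MTCNN face crops and rPPG signals).
        fused = torch.relu(self.fuse(torch.cat([vis_feats, ir_feats], dim=-1)))
        seq_out, _ = self.bilstm(fused)
        # Classify the fatigue state from the last time step's hidden state.
        return self.head(seq_out[:, -1, :])


# Example usage with random stand-in features for a 30-frame window.
model = BiLSTMFatigueClassifier()
vis = torch.randn(4, 30, 128)
ir = torch.randn(4, 30, 128)
logits = model(vis, ir)  # shape: (4, 2)
```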