Two major factors contributing to human-induced traffic accidents are driver fatigue and distraction. To reduce the rate of car accidents, some studies have introduced systems that record biometric measurements, driver behavior, and the driver's physical features while driving. Data obtained from these systems are used to predict the driver's attention state. However, such attention-detection systems face challenges in applicability and in accurately detecting behaviors that negatively impact driving. Contactless physical feature extraction combined with deep learning is an effective and practical approach in this context. Therefore, this study proposes a method based on multi-feature processing in conjunction with you only look once (YOLO)-based object detection to classify driver attention. Experiments and validation were conducted on the open-source yawning detection dataset (YawDD) and the National Tsing Hua University drowsy driver detection dataset (NTHU-DDD). The proposed method not only outperforms those of prior studies but also achieves high efficiency in detecting drivers' multiple emotion-related features and in assisting driver attention.
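As a rough illustration of the kind of pipeline the abstract describes, the following is a minimal sketch of per-frame YOLO-based detection of driver states. It is not the authors' implementation: the weights file (driver_attention.pt), the class names (yawn, eyes_closed), and the confidence threshold are hypothetical, and the sketch assumes the ultralytics Python package and OpenCV for camera input.

```python
# Minimal sketch (assumptions noted above): detect inattention cues in each
# frame of an in-cabin camera stream with a custom-trained YOLO model.
import cv2
from ultralytics import YOLO

model = YOLO("driver_attention.pt")  # hypothetical custom-trained weights

cap = cv2.VideoCapture(0)  # in-cabin camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the current frame; results[0].boxes holds the
    # bounding boxes with class indices and confidence scores.
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        conf = float(box.conf)
        # "yawn"/"eyes_closed" and the 0.5 threshold are illustrative only.
        if cls_name in ("yawn", "eyes_closed") and conf > 0.5:
            print(f"Inattention cue: {cls_name} ({conf:.2f})")
cap.release()
```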