Abstract

To mitigate the risks caused by distracted and fatigued driving, timely and accurate detection of these behaviors is essential for driving safety. Among state-of-the-art detection methods based on in-car image analysis, the interaction of multiple distracted and fatigued driving behaviors can render detection targeted at a single behavior ineffective and unreliable, while complex neural-network-based detection methods can suffer from poor interpretability and limited feasibility of hardware implementation. This article proposes a novel cooperative detection method for distracted and fatigued driving behaviors that jointly considers detection performance, operational complexity, and practical hardware implementation. The key points for detecting hand-held calling, continuous left/right head turning, yawning, and eye closure, as well as the cooperative relations among these behaviors, are investigated. Experiments are conducted on an established dataset comprising indoor driving-simulation images and real cockpit driving images of different drivers under various illumination and background settings. On this Di_Fa_C_Tes dataset, the proposed method, using a support vector machine (SVM) classifier on a PC platform, achieves above 98% detection precision and a processing speed of 37 frames/s for the tested behaviors. Additionally, a hardware evaluation of the proposed method on the i.MX 8QuadMax platform reaches above 96.8% precision and 16 frames/s. The experimental results demonstrate the effectiveness of the proposed method for accurate and fast detection of distracted and fatigued driving behaviors and its promise for embedded-system applications.
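
The abstract does not detail the feature pipeline, so the following is only a rough illustration of the general approach it describes: per-frame features derived from facial and hand key points classified with an SVM. The feature names, label set, and synthetic data below are hypothetical placeholders, not the paper's implementation; scikit-learn's SVC is used as one possible SVM backend.

```python
# Minimal sketch (not the paper's implementation): classify per-frame behavior
# features with an SVM, assuming features such as eye aspect ratio, mouth
# aspect ratio, head-yaw angle, and a hand-near-ear flag have already been
# extracted from in-car images. All names and data below are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature matrix: one row per frame,
# columns = [eye_aspect_ratio, mouth_aspect_ratio, head_yaw_deg, hand_near_ear]
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
# Hypothetical labels: 0 = normal, 1 = hand-held calling,
# 2 = looking left/right, 3 = yawning, 4 = eye closure
y = rng.integers(0, 5, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# RBF-kernel SVM with feature standardization; hyperparameters are placeholders.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```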
