Abstract
Over the past few decades, distracted driving has become a common driving habit that threatens the safety of drivers themselves as well as the public. Distracted driving behaviors are diverse and complex, and they are difficult to discriminate directly from physiological indicators. On the one hand, traditional distracted driving recognition methods based on eye and facial feature analysis exhibit low recognition accuracy, and the behavior types they identify are insufficient to support intelligent driving-assistance decisions. On the other hand, convolutional neural network (CNN)-based deep learning (DL) methods lack causal reasoning ability over behavior patterns, so CNN-based recognition is easily degraded by noise and occlusion. In this chapter, we propose a distracted behavior recognition method based on a spatial-temporal biline DL network (STD-DLN) and a causal and-or graph (C-AOG). STD-DLN fuses attention features extracted from dynamic optical flow with spatial features of individual video frames to recognize distracted driving postures. Furthermore, a causal knowledge fence based on the C-AOG is fused with STD-DLN to improve recognition robustness. The C-AOG represents the causality of behavior changes and adopts counterfactual reasoning to suppress recognition failures caused by frame feature distortion or occlusion between body agents. We compared the performance of the proposed method with other state-of-the-art (SOTA) DL methods on two public datasets and one self-collected dataset. Experimental results demonstrate that our method significantly outperforms other SOTA methods when recognizing distracted driving behavior from consecutive frames. In addition, our method maintains accurate continuous recognition and robust performance under incomplete-observation scenarios.