Abstract

Driver distraction is a major cause of traffic accidents, so detecting distracted behavior in time to warn the driver is essential for road safety. In driver behavior recognition, the variety of behaviors and the diversity of driving environments degrade detection accuracy, and most existing methods suffer from severe information loss. These factors make it challenging to achieve accurate, real-time detection of driver distraction. In this paper, we propose CEAM-YOLOv7, an improved YOLOv7 based on channel expansion and an attention mechanism for driver distraction behavior detection. The global attention mechanism (GAM) module focuses on key information to improve accuracy: by inserting GAM into the Backbone and Head of YOLOv7, global dimensional interaction features are scaled up so that the network can extract key features. Furthermore, the convolution computation in the CEAM-YOLOv7 architecture is significantly simplified, which increases detection speed. Combined with inversion and contrast limited adaptive histogram equalization (CLAHE) image enhancement, a channel expansion (CE) algorithm for data augmentation is presented to further improve detection on infrared (IR) images. On the driver distraction IR dataset of Hunan University of Science and Technology (HNUST) and Hunan University (HNU), verification results show that CEAM-YOLOv7 achieves 20.26% higher mAP than the original YOLOv7 model and reaches 156 FPS, demonstrating that CEAM-YOLOv7 outperforms state-of-the-art methods in both accuracy and speed.
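To make the channel expansion idea concrete, the sketch below builds a three-channel input from a single-channel IR frame by stacking the raw image, its inversion, and a CLAHE-enhanced copy. This is only a minimal illustration under stated assumptions; the exact channel layout, CLAHE parameters, and how the result feeds the YOLOv7 data pipeline are not specified in the abstract and are assumptions here, not the authors' reference implementation.

```python
# Minimal sketch (assumption, not the paper's reference code): expand a
# single-channel IR frame into three channels using inversion and CLAHE.
import cv2
import numpy as np

def channel_expand(ir_frame: np.ndarray,
                   clip_limit: float = 2.0,
                   tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Return an HxWx3 uint8 image with [raw, inverted, CLAHE] channels."""
    if ir_frame.ndim == 3:                      # collapse to one channel if needed
        ir_frame = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY)

    inverted = 255 - ir_frame                   # inversion channel
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    enhanced = clahe.apply(ir_frame)            # CLAHE-enhanced channel

    return np.dstack([ir_frame, inverted, enhanced])

if __name__ == "__main__":
    # "driver_ir.png" is a hypothetical input path for illustration only.
    frame = cv2.imread("driver_ir.png", cv2.IMREAD_GRAYSCALE)
    expanded = channel_expand(frame)
    cv2.imwrite("driver_ir_expanded.png", expanded)
```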
