Abstract

The rapid development of the transportation industry has brought potential safety hazards. To address the problem of driving safety, applying artificial intelligence technology to safe-driving-behavior recognition can effectively reduce the accident rate and economic losses. Because driving-monitoring video sequences contain interference such as mixed spatiotemporal background signals, the recognition accuracy for small targets such as the human eyes is low. In this paper, an improved dual-stream convolutional network is proposed to recognize safe driving behavior. Building on convolutional neural networks (CNNs), an attention mechanism (AM) is integrated into a long short-term memory (LSTM) neural network, and a hybrid dual-stream AM-LSTM convolutional network channel is designed. The spatial stream channel uses a CNN to extract spatial feature values from the video images and replaces traditional pooling with pyramid pooling to normalize scale transformations. The temporal stream channel applies a single-shot multibox detector (SSD) to adjacent pairs of video frames to detect small objects such as the face and eyes. The AM-LSTM then fuses and classifies the dual-stream information. A self-built driving-behavior video image set is constructed, and ROC, accuracy, and loss-function experiments are carried out on the FDDB database, the VOT100 data set, and the self-built image set, respectively. Compared with CNN, SSD, IDT, and dual-stream recognition methods, the accuracy of the proposed method improves by at least 1.4%, and the mean absolute error on four video sequences improves by more than 2%. Moreover, on the self-built image set, the recognition rate for dozing reaches 68.3%, higher than that of the other methods. The experimental results show that the method has good recognition accuracy and practical application value.
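The abstract states that the spatial stream replaces traditional pooling with pyramid pooling to normalize scale transformations. The paper's exact layer configuration is not given here, so the following is only a minimal NumPy sketch of the standard spatial-pyramid-pooling idea: feature maps of any spatial size are max-pooled over a fixed grid of bins at several levels, yielding a fixed-length vector regardless of input scale. The pyramid levels (1, 2, 4) and channel count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector.

    For each pyramid level n, the H x W plane is divided into an n x n
    grid and each cell is max-pooled per channel, so inputs of any
    spatial size yield a vector of length C * sum(n * n for n in levels).
    """
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Bin edges span the whole map even when H or W is not divisible by n.
        h_edges = np.linspace(0, h, n + 1).astype(int)
        w_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, h_edges[i]:h_edges[i + 1],
                                   w_edges[j]:w_edges[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))  # per-channel max in this bin
    return np.concatenate(pooled)

# Two different input sizes map to the same output length: 64 * (1 + 4 + 16) = 1344.
a = spatial_pyramid_pool(np.random.rand(64, 13, 13))
b = spatial_pyramid_pool(np.random.rand(64, 7, 9))
print(a.shape, b.shape)  # (1344,) (1344,)
```

Because the output length is fixed, a fully connected classifier after this layer can accept video frames at varying resolutions, which is the scale-normalization property the abstract refers to.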

Highlights

  • China’s manufacturing industry has entered a period of rapid growth, and the logistics and transportation industry has risen with it

  • Research shows that traffic accidents are mainly caused by human, vehicle, road, and environmental factors; among these, fatigue driving and unsafe behaviors are the main causes, accounting for 69% of traffic accidents [1]

  • An improved hybrid dual-channel convolutional neural network (CNN) is proposed for driving-safety behavior recognition using artificial intelligence technology and deep-learning theory. This algorithm fuses CNN and long short-term memory (LSTM) network structures and integrates an attention mechanism

Introduction

China’s manufacturing industry has entered a period of rapid growth, and the logistics and transportation industry has risen with it. With the improvement in people’s living standards, cars have become the main means of transportation, bringing increasingly busy traffic and a growing number of traffic accidents, which cause losses to people’s lives, production, and property. Research shows that traffic accidents are mainly caused by human, vehicle, road, and environmental factors; among these, fatigue driving and unsafe behaviors are the main causes, accounting for 69% of traffic accidents [1]. Long periods of fatigue driving easily lead to traffic accidents [2]. Unsafe behaviors mainly include illegal actions, phone calling, smoking, inattention, eating, and fatigue driving [3, 4]. Therefore, how to use modern scientific and technological means to reduce traffic accidents and losses is a question worth studying to ensure life safety.


