Abstract

It is important for construction personnel to observe the dress code: correctly wearing safety helmets and reflective vests helps protect workers' lives and ensures safety on the construction site. A YOLO network‐based detection algorithm is proposed for the construction personnel dress code (YOLO‐CPDC). Firstly, Multi‐Head Self‐Attention (MHSA) is introduced into the backbone network to build a hybrid backbone, called Convolution MHSA Network (CMNet). The CMNet gives the model a global field of view and enhances its ability to detect small and occluded targets. Secondly, an efficient and lightweight convolution module, named Ghost Shuffle Attention‐Conv‐BN‐SiLU (GSA‐CBS), is designed and used in the neck network. The resulting GSA neck reduces the model size without degrading performance. Finally, SIoU is adopted in the loss function and Soft-NMS is used for post‐processing. Experimental results on a self‐constructed dataset show that the YOLO‐CPDC algorithm achieves higher detection accuracy than current methods, reaching a mAP50 of 93.6%. Compared with YOLOv5s, the number of parameters is reduced by 18% and the mAP50 is improved by 1.1%. Overall, this research effectively meets the practical demand for dress code detection in construction scenes.
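The abstract states that Soft-NMS replaces standard NMS in post-processing. As a rough illustration of that step, the sketch below implements the Gaussian score-decay variant of Soft-NMS in plain Python; the box format, `sigma`, and `score_thresh` values are assumptions for illustration, not parameters reported in the paper.

```python
import math

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of overlapping boxes
    instead of discarding them outright, which helps retain
    heavily occluded targets (e.g. overlapping workers)."""
    scores = list(scores)
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Decay remaining scores by their overlap with the best box.
        for i in order:
            scores[i] *= math.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        # Drop boxes whose score fell below the threshold, re-sort the rest.
        order = sorted((i for i in order if scores[i] > score_thresh),
                       key=lambda i: scores[i], reverse=True)
    return keep
```

Unlike hard NMS, a detection overlapping a higher-scoring one is only suppressed once its decayed score drops below `score_thresh`, which is why Soft-NMS tends to help in crowded construction scenes.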
