Abstract
It is important for construction personnel to observe the dress code: the correct wearing of safety helmets and reflective vests helps protect workers' lives and ensures safety on construction sites. A YOLO network-based detection algorithm for the construction personnel dress code (YOLO-CPDC) is proposed. Firstly, Multi-Head Self-Attention (MHSA) is introduced into the backbone network to build a hybrid backbone, called Convolution MHSA Network (CMNet). The CMNet gives the model a global field of view and enhances its ability to detect small and occluded targets. Secondly, an efficient and lightweight convolution module, named Ghost Shuffle Attention-Conv-BN-SiLU (GSA-CBS), is designed and used in the neck network. The resulting GSANeck network reduces the model size without affecting performance. Finally, SIoU is adopted in the loss function and Soft NMS is used for post-processing. Experimental results on a self-constructed dataset show that the YOLO-CPDC algorithm achieves higher detection accuracy than current methods, reaching a mAP50 of 93.6%. Compared with YOLOv5s, the number of parameters of our model is reduced by 18% and the mAP50 is improved by 1.1%. Overall, this research effectively meets the practical demand for dress code detection in construction scenes.
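The abstract describes a hybrid backbone in which convolutional feature extraction is followed by Multi-Head Self-Attention over the spatial positions of the feature map. The following PyTorch sketch illustrates this general idea only; the class name `ConvMHSABlock`, the channel and head counts, and the residual/normalization layout are illustrative assumptions, not the authors' exact CMNet design.

```python
# Minimal sketch of a convolution + MHSA hybrid block (illustrative, not the paper's CMNet).
import torch
import torch.nn as nn


class ConvMHSABlock(nn.Module):
    """Conv-BN-SiLU stage followed by Multi-Head Self-Attention over spatial tokens."""

    def __init__(self, channels: int = 256, num_heads: int = 4):
        super().__init__()
        # Local feature extraction: Conv-BN-SiLU, the standard YOLOv5-style convolution unit.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(inplace=True),
        )
        # Global context: self-attention across all spatial positions of the feature map.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)                        # (B, C, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)      # flatten to a (B, H*W, C) token sequence
        attn_out, _ = self.attn(seq, seq, seq)  # self-attention gives each position a global view
        seq = self.norm(seq + attn_out)         # residual connection + layer norm
        return seq.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = ConvMHSABlock(channels=256, num_heads=4)
    feat = torch.randn(1, 256, 20, 20)          # a low-resolution backbone feature map
    print(block(feat).shape)                    # torch.Size([1, 256, 20, 20])
```

Applying attention only on low-resolution feature maps, as sketched here, keeps the quadratic cost of self-attention manageable while still providing the global receptive field credited with improving detection of small and occluded targets.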