The correct wearing of safety helmets and reflective vests is of great importance on construction, office, and civil engineering sites. To address the low detection accuracy and high algorithmic complexity that existing algorithms exhibit when detecting small targets such as safety helmets and reflective clothing against complex backgrounds, an improved algorithm based on YOLOv8n is proposed. First, the SE (Squeeze-and-Excitation) module is used to suppress interference from complex environments. Next, the IoU function is modified to speed up computation. Then, the lightweight universal upsampling operator CARAFE is employed to obtain a larger receptive field. Finally, the Bidirectional Feature Pyramid Network (BiFPN) replaces the Concat module of the original head layer. Based on these four modifications, this article names the new model SDCB-YOLO, after the initial letters of the four respective modules. Experimental results show that the mAP of SDCB-YOLO on the test set reaches 97.1%, which is 4.6% higher than YOLOv5s and 3.5% higher than YOLOv8n. The model has 3,094,304 parameters, a computational cost of 8.4 GFLOPs, and a model size of 6.13 MB. Compared with YOLOv5s (7,030,417 parameters, 16.0 GFLOPs, 13.79 MB), SDCB-YOLO is significantly smaller; compared with YOLOv8n (3,011,628 parameters, 8.2 GFLOPs, 6.11 MB), its parameter count and model size increase only slightly while its computational load remains comparable. The improved detection algorithm presented in this article therefore keeps the model lightweight while significantly enhancing its detection accuracy.
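The abstract does not specify which IoU variant the modified loss uses, but all such variants build on the standard intersection-over-union between two axis-aligned boxes. As a point of reference only (not the paper's modified function), a minimal sketch of the baseline IoU computation, assuming `(x1, y1, x2, y2)` corner coordinates:

```python
def iou(box_a, box_b):
    """Baseline IoU between two boxes given as (x1, y1, x2, y2) corners.

    This is the standard definition that variants such as DIoU/CIoU/SIoU
    extend with extra penalty terms; the specific modification used by
    SDCB-YOLO is not stated in the abstract.
    """
    # Intersection rectangle (clamped to zero if the boxes do not overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union = sum of areas minus the shared intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0, and partial overlaps fall in between, which is why IoU serves both as a matching criterion and as the core of box-regression losses.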