Abstract
This paper presents a lightweight construction safety behavior detection model based on an improved YOLOv8, aiming to raise the detection accuracy of unsafe behaviors on construction sites while keeping the model lightweight. YOLO (You Only Look Once) is an object detection algorithm that achieves real-time, efficient detection by dividing an image into grids and predicting bounding boxes and object categories for each grid cell. Traditional YOLO models often suffer from missed detections and insufficient feature processing in complex scenes, especially on large-scale datasets. To address these problems, this paper builds on YOLOv8 and adopts the lighter MobileNetV3 as the backbone in place of the original CSPDarknet53, reducing computation and increasing processing speed. A Receptive Field Block (RFB) module is incorporated to enlarge the receptive field and strengthen multi-scale feature capture, and a Global Attention Mechanism (GAM) is introduced to enhance the recognition of local features. Experimental results show that the improved YOLOv8 model performs well in detecting five common unsafe behaviors of construction workers, achieving an mAP of 0.86, a precision of 0.84, a recall of 0.87, an F1-score of 0.85, and an IoU of 0.8, significantly outperforming traditional methods. These results indicate that the model improves detection accuracy while remaining lightweight, and that it has broad application prospects.
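As a concrete illustration of the attention component mentioned above, the following is a minimal PyTorch sketch of a GAM-style block: channel attention via an MLP applied over the channel dimension, followed by spatial attention via convolutions. The class name `GAMAttention`, the reduction ratio, and the kernel sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GAMAttention(nn.Module):
    """Minimal GAM-style block (illustrative sketch, not the paper's code):
    channel attention via an MLP over permuted features, then spatial
    attention via two 7x7 convolutions."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        # Channel attention: MLP acting on the channel dimension
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )
        # Spatial attention: bottlenecked 7x7 convolutions
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: permute to (B, H, W, C), apply MLP, permute back
        attn_c = self.channel_mlp(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = x * torch.sigmoid(attn_c)
        # Spatial attention over the channel-refined features
        attn_s = torch.sigmoid(self.spatial(x))
        return x * attn_s


if __name__ == "__main__":
    feats = torch.randn(1, 256, 20, 20)  # e.g. a neck-level feature map
    out = GAMAttention(256)(feats)
    print(out.shape)  # torch.Size([1, 256, 20, 20])
```

In practice such a block would be inserted after selected backbone or neck stages of the detector, refining feature maps before the detection heads; the exact placement used in the paper is described in its methodology section.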