Abstract

Recently, computer-vision-based helmet-wear detection has become an important means for construction units to implement safety management. Improving the accuracy and speed of helmet-wear recognition is the critical challenge for practical applications. On construction sites, cameras are typically mounted high and far from workers, so workers and helmets appear as small- and medium-sized targets that require a smaller receptive field for detection. This paper improves the network structure of YOLOv5 by adding a set of convolutional layers to the backbone network and fusing their features with those of the shallow residual network. In each branch, helmet detection is performed according to the preceding and following semantic information, forming a deeply fused, fast helmet-detection model. The loss function is also improved: GIoU Loss is replaced with the better-performing CIoU Loss. Using the improved model to detect whether workers are wearing helmets, accuracy reaches 91.6% and mAP reaches 93.2%.

Keywords: Helmet detection, YOLOv5, Small target detection
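The GIoU→CIoU replacement mentioned in the abstract can be illustrated with the standard CIoU formulation (not the paper's own code, which is not given here): CIoU extends IoU with a center-distance penalty and an aspect-ratio consistency term. A minimal sketch, assuming axis-aligned boxes in `(x1, y1, x2, y2)` format:

```python
import math

def ciou_loss(box_a, box_b):
    """CIoU loss between two boxes given as (x1, y1, x2, y2).

    CIoU = IoU - rho^2 / c^2 - alpha * v, and loss = 1 - CIoU, where
    rho is the distance between box centers, c is the diagonal of the
    smallest enclosing box, and v penalizes aspect-ratio mismatch.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    eps = 1e-9  # guards against division by zero

    # Intersection and union areas for the plain IoU term.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)

    # Squared distance between box centers (rho^2).
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + \
           ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2

    # Squared diagonal of the smallest enclosing box (c^2).
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term v and its trade-off weight alpha.
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1)) -
                              math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v + eps)

    return 1 - (iou - rho2 / c2 - alpha * v)
```

Unlike GIoU, the distance term keeps a useful gradient even when the predicted and ground-truth boxes do not overlap, which speeds convergence for the small targets the paper targets. In training, this scalar form would be vectorized over batches of anchors.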
