Abstract

Computer vision object detection methods are currently used for safety inspections of construction site videos and images. These methods detect bounding boxes and apply handcrafted rules to perform personal protective equipment (PPE) compliance checks. This paper presents a new method that improves the breadth and depth of vision-based safety compliance checking by explicitly classifying worker-tool interactions. A detection model is trained on a newly constructed image dataset for construction sites, achieving 52.9% mean average precision over 10 object categories and 89.4% average precision for detecting workers. Using this detector and the new dataset, the proposed human-object interaction recognition model achieves 79.78% precision and 77.64% recall for hard hat checking, and 79.11% precision and 75.29% recall for safety coloring checking. The model also verifies hand protection for workers when tools are being used, with 66.2% precision and 64.86% recall. In these checking tasks, the proposed model outperforms both post-processing detected objects with handcrafted rules and using detected objects alone.
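The abstract describes the approach only at a summary level, so the Python sketch below is purely illustrative rather than the authors' implementation: the class names (Detection, Interaction), labels such as "worker", "hard_hat", and "glove", the verb "using", and the overlap and score thresholds are all assumptions. It shows one way detected objects combined with worker-tool interaction labels could drive compliance checks of the kind reported above.

# Illustrative sketch only: all labels, verbs, and thresholds below are
# hypothetical placeholders, not the paper's published pipeline.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str      # e.g. "worker", "hard_hat", "glove", "power_drill"
    box: tuple      # (x1, y1, x2, y2) in image pixels
    score: float

@dataclass
class Interaction:
    worker_idx: int  # index of the worker in the detection list
    object_idx: int  # index of the interacted object
    verb: str        # e.g. "using", "carrying" (hypothetical verb set)
    score: float

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def check_compliance(dets: List[Detection], inters: List[Interaction]):
    """Flag workers missing a hard hat, or missing gloves while using a tool."""
    reports = []
    for wi, det in enumerate(dets):
        if det.label != "worker":
            continue
        # Hard-hat check: any hard-hat box overlapping this worker's box.
        has_hat = any(d.label == "hard_hat" and iou(d.box, det.box) > 0.1
                      for d in dets)
        # Tool-use check: the interaction model says this worker is using a tool.
        using_tool = any(i.worker_idx == wi and i.verb == "using" and i.score > 0.5
                         for i in inters)
        has_glove = any(d.label == "glove" and iou(d.box, det.box) > 0.1
                        for d in dets)
        reports.append({
            "worker": wi,
            "hard_hat_ok": has_hat,
            "hand_protection_ok": (not using_tool) or has_glove,
        })
    return reports

In this framing, the interaction classifier supplies the "using" verb that a detection-only pipeline lacks, which is what allows the hand-protection check to apply only when a tool is actually in use.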
