Abstract

The use of robots in the construction industry is poised to increase. Human-robot collaboration (HRC) pairs dexterous workers with tireless robots to execute complicated construction operations. However, the physical aspects of HRC can create timing and collision hazards at construction sites. A central issue is that a construction robot cannot understand a worker's movement intentions, and the worker's posture is an essential indicator for avoiding collisions. Therefore, for workers and robots to collaborate safely, robots must be able to assess workers' postures and movement intentions. To address this need, this study leverages computer vision techniques to enable collaborative robots to estimate worker positions and poses. The proposed approach employs a multi-stage convolutional neural network to first identify workers' joints; it then assembles the detected joints into full-body poses using the Part Affinity Fields technique, allowing the robot to understand worker poses. To examine the feasibility of this approach, an experiment was designed in which four subjects performed bricklaying tasks in collaboration with a masonry robot. The results show that the proposed approach enables robots to estimate subjects' postures with 63.3% precision, measured by the percentage of correct keypoints (PCK) metric. The findings pave the way for collaborative robots that understand workers' movement intentions, supporting safe HRC at construction sites.
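The percentage of correct keypoints (PCK) metric mentioned above can be illustrated with a minimal sketch. The abstract does not specify the reference scale or threshold used in the study, so the `ref_length` and `alpha` parameters below are assumptions chosen for illustration (PCK conventionally counts a predicted joint as correct when it lies within a threshold fraction of a per-pose reference length, such as the torso size, from the ground-truth joint):

```python
import numpy as np

def percentage_correct_keypoints(pred, gt, ref_length, alpha=0.5):
    """Fraction of predicted joints lying within alpha * ref_length
    of their ground-truth locations.

    pred, gt   : (num_joints, 2) arrays of (x, y) pixel coordinates.
    ref_length : per-pose reference scale (e.g., torso length) in pixels.
    alpha      : threshold fraction of the reference scale (assumed value).
    """
    # Euclidean distance between each predicted and ground-truth joint.
    dists = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=1)
    # A joint is "correct" if its error is within the scaled threshold.
    return float(np.mean(dists <= alpha * ref_length))

# Toy example: 4 joints, torso reference length of 10 px, threshold 5 px.
gt = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pred = gt + np.array([[1.0, 0.0], [0.0, 1.0], [6.0, 0.0], [0.0, 0.0]])
score = percentage_correct_keypoints(pred, gt, ref_length=10.0)
print(score)  # 3 of 4 joints fall within the 5 px threshold -> 0.75
```

This is only a sketch of the standard PCK definition, not the authors' exact evaluation code; the study's reported 63.3% corresponds to the fraction of joints meeting whatever threshold the authors chose.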
