Abstract

In the field of construction, human-robot collaboration and mixed reality (MR) open new possibilities, but safety and reliability issues persist. The lack of flexibility and adaptability in current preprogrammed systems hampers real-time human-robot collaboration. A key gap in this area is the robot's ability to interpret and accurately execute operations based on real-time visual instructions and constraints provided by the human collaborator and the working environment. This paper presents an MR-based human-robot collaboration method built on visual feedback from a vision-based collaborative industrial robot system that we are developing for wood stereotomy. The method is applied to an alternating workflow in which a skilled carpenter lays out the joinery on the workpiece and the robot cuts it. Cutting operations are communicated to the robot solely through lines and conventional “carpenter's marks” drawn on the timbers by the carpenter. The robot system's accuracy in locating and interpreting marks as cutting operations is evaluated by automatically constructing a 3D model of the cut shape from the vision system data. A digital twin of the robot allows the carpenter to previsualize all motions required of the robot, both to validate the task and to know when it is safe to enter the collaborative workspace. Our experimental results offer insights into human-robot communication requirements for collaborative robot system applications in timber frame construction.
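The mark-interpretation step described above can be illustrated with a minimal sketch. The mark vocabulary, class names, and confidence threshold below are hypothetical illustrations, not the paper's actual detector or symbol set; the sketch only shows the general shape of mapping detected marks to cutting operations while deferring low-confidence marks to the carpenter.

```python
from dataclasses import dataclass

# Hypothetical mark vocabulary: a small illustrative subset of conventional
# carpenter's marks, each mapped to a cutting operation the robot could run.
MARK_TO_OPERATION = {
    "crosscut_line": "saw_crosscut",   # straight line across the face
    "rip_line": "saw_rip",             # line along the grain
    "mortise_box": "mill_mortise",     # rectangle marking a mortise pocket
    "waste_x": "no_cut_waste_side",    # 'X' marking the waste side, not a cut
}

@dataclass
class DetectedMark:
    """A mark located by the vision system on the timber surface."""
    kind: str               # symbol class reported by the detector
    position_mm: tuple      # (x, y) on the workpiece face, in millimetres
    confidence: float       # detector confidence in [0, 1]

def interpret_marks(marks, min_confidence=0.8):
    """Translate detected marks into an ordered list of cutting operations.

    Marks below the confidence threshold are skipped, so ambiguous
    instructions are left for the carpenter to confirm rather than
    executed blindly.
    """
    operations = []
    for mark in marks:
        if mark.confidence < min_confidence:
            continue
        op = MARK_TO_OPERATION.get(mark.kind)
        if op and op != "no_cut_waste_side":
            operations.append((op, mark.position_mm))
    return operations
```

In this sketch the waste-side 'X' deliberately produces no operation, mirroring the paper's point that the robot must distinguish marks that define cuts from marks that merely annotate the layout.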
