Abstract

In human–robot collaborative sewing, the robot follows a worker's sewing actions and completes the corresponding operations, which can enhance production efficiency. However, when the robot follows the worker's sewing actions through interactive information, accuracy remains low. To improve the accuracy with which the robot follows sewing actions, this paper designs a human upper-limb sewing-action-following system based on visual information. The system consists of an improved OpenPose model, a Gaussian mixture model (GMM), and Gaussian mixture regression (GMR). The improved OpenPose model identifies the sewing action of the human upper limb, and a label-fusion method corrects the joint-point labels when the upper limb is occluded by fabric. The GMM then encodes each motion element together with time to obtain the regression function of the Gaussian components, and GMR predicts the connections between motion elements and generates the sewing motion trajectory. Finally, experimental verification and simulation are carried out on a collaborative-robot experimental platform and in a simulation environment. The experimental results show that the tracking error angle can be kept within 0.04 rad during the first 2 s of robot movement. The sewing-action-following system can therefore achieve higher precision and, to a certain extent, promote the development of human–robot collaboration technology.
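
To make the GMM/GMR step concrete, the sketch below shows the general technique the abstract names: fitting a GMM jointly over (time, joint angle) samples and then using GMR to regress the expected angle at each query time. The demo signal, component count, and variable names are illustrative assumptions, not the paper's actual features or parameters.

```python
# Minimal GMM/GMR trajectory-regression sketch (assumed demo data, not the
# paper's dataset; the paper's feature set and hyperparameters are unspecified).
import numpy as np
from sklearn.mixture import GaussianMixture

# Demonstration data: a joint angle theta(t) sampled over 2 s.
t = np.linspace(0.0, 2.0, 200)
theta = 0.5 * np.sin(1.5 * t) + 0.02 * np.random.randn(t.size)
X = np.column_stack([t, theta])  # joint (time, angle) samples

# GMM: encode the motion element jointly over time and angle.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(X)

def gmr(gmm, t_query):
    """Gaussian mixture regression: E[theta | t] from the fitted joint GMM."""
    t_query = np.atleast_1d(t_query)
    mu = gmm.means_         # shape (K, 2): [mu_t, mu_theta] per component
    cov = gmm.covariances_  # shape (K, 2, 2)
    out = np.zeros(t_query.size)
    for i, tq in enumerate(t_query):
        # Responsibility of each component for the input time tq.
        h = gmm.weights_ * np.array([
            np.exp(-0.5 * (tq - mu[k, 0]) ** 2 / cov[k, 0, 0])
            / np.sqrt(2.0 * np.pi * cov[k, 0, 0])
            for k in range(gmm.n_components)
        ])
        h /= h.sum()
        # Conditional mean of theta given t for each component.
        cond = mu[:, 1] + cov[:, 1, 0] / cov[:, 0, 0] * (tq - mu[:, 0])
        out[i] = np.dot(h, cond)
    return out

# Regressed trajectory at 50 evenly spaced query times.
trajectory = gmr(gmm, np.linspace(0.0, 2.0, 50))
```

The conditional mean computed per component is the standard Gaussian conditioning formula; weighting the per-component means by the input responsibilities yields the smooth trajectory that GMR is typically used for in learning-from-demonstration settings.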
