Abstract

This paper presents a robot teaching system based on hand-robot contact state detection and human motion intent recognition. The system detects the contact state between the human hand and the robot and extracts motion intention information from human surface electromyography (sEMG) signals to control the robot's motion. First, a hand-robot contact state detection method is proposed based on fusing the virtual robot environment with the physical environment. An object detection algorithm identifies the human hand in the color image of the physical environment and computes its pixel coordinates. Meanwhile, synthetic images of the virtual robot environment are combined with images of the physical robot scene to determine whether the human hand is in contact with the robot. In addition, a deep-learning-based human motion intention recognition model is designed to recognize human motion intention from sEMG signals. Finally, a robot motion mode selection module combines the hand-robot contact state and the recognized motion intention to command the robot in single-axis, linear, or repositioning motion. Experimental results indicate that the proposed system can perform online robot teaching in all three motion modes.
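The abstract describes a decision step that maps the detected contact state and the recognized sEMG intent to one of three motion modes. The paper does not specify the actual decision rules, so the following is only an illustrative sketch; all names (`Intent`, `Mode`, `select_mode`) and the particular intent-to-mode mapping are assumptions, not taken from the paper.

```python
from enum import Enum, auto

class Intent(Enum):
    """Hypothetical motion intents decoded from sEMG (names assumed)."""
    PUSH = auto()
    PULL = auto()
    HOLD = auto()

class Mode(Enum):
    """Motion modes named in the abstract, plus an assumed idle state."""
    SINGLE_AXIS = auto()
    LINEAR = auto()
    REPOSITION = auto()
    IDLE = auto()

def select_mode(hand_in_contact: bool, intent: Intent) -> Mode:
    """Toy mapping from (contact state, intent) to a robot motion mode.

    The abstract only states that both inputs are combined; this
    particular rule table is illustrative, not the authors' method.
    """
    if not hand_in_contact:
        # No hand-robot contact detected: do not move the robot.
        return Mode.IDLE
    if intent is Intent.PUSH:
        return Mode.SINGLE_AXIS
    if intent is Intent.PULL:
        return Mode.LINEAR
    return Mode.REPOSITION
```

A caller would invoke `select_mode` once per control cycle, feeding in the latest vision-based contact result and sEMG classification, then dispatch the returned mode to the corresponding robot controller.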
