Abstract

With the development of Industry 4.0, robots are becoming intelligent and collaborative: they can interact naturally with humans and work alongside them in a shared workspace. Traditional teaching methods are no longer suitable for production based on human–robot collaboration, because their teaching processes are complicated and require highly skilled staff. This paper focuses on a natural way of online teaching that can be applied to tasks such as welding, painting, and stamping, and presents an online teaching method based on the fusion of speech and gesture. A depth camera (Kinect) and an inertial measurement unit are used to capture the human's speech and gesture, and an interval Kalman filter and an improved particle filter are employed to estimate the gesture. To integrate the speech and gesture information more deeply, a novel text-based method of audio-visual fusion is proposed, which extracts the most useful information from speech and gestures by transforming both into text. Finally, a maximum entropy algorithm converts the fused text into the corresponding robot instructions. The practicality and effectiveness of the proposed approach were validated by five subjects without robot teaching experience. The results indicate that the online teaching system can successfully teach robot manipulators.
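The final step described above, mapping fused speech-plus-gesture text to robot instructions with a maximum entropy model, can be sketched as a small multinomial logistic-regression ("maximum entropy") classifier over bag-of-words features. This is a minimal illustration, not the paper's implementation: the instruction labels, training pairs, and hyperparameters are invented for the example.

```python
import math

# Minimal maximum-entropy (multinomial logistic regression) classifier that
# maps fused "speech + gesture" text to a robot instruction label.
# All commands and training pairs below are invented for illustration.

def featurize(text, vocab):
    """Bag-of-words indicator features over a fixed vocabulary."""
    words = set(text.split())
    return [1.0 if w in words else 0.0 for w in vocab]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def train(data, labels, vocab, lr=0.5, epochs=200):
    """Gradient ascent on the conditional log-likelihood (one weight row per label)."""
    W = [[0.0] * len(vocab) for _ in labels]
    for _ in range(epochs):
        for text, gold in data:
            x = featurize(text, vocab)
            p = softmax([sum(w * f for w, f in zip(row, x)) for row in W])
            for k, label in enumerate(labels):
                grad = (1.0 if label == gold else 0.0) - p[k]
                W[k] = [w + lr * grad * f for w, f in zip(W[k], x)]
    return W

def predict(text, W, labels, vocab):
    x = featurize(text, vocab)
    scores = [sum(w * f for w, f in zip(row, x)) for row in W]
    return labels[scores.index(max(scores))]

# Toy fused-text/instruction pairs (hypothetical, not from the paper).
data = [
    ("move left", "MOVE_LEFT"),
    ("move right", "MOVE_RIGHT"),
    ("weld here", "WELD"),
]
labels = ["MOVE_LEFT", "MOVE_RIGHT", "WELD"]
vocab = sorted({w for text, _ in data for w in text.split()})

W = train(data, labels, vocab)
print(predict("move left", W, labels, vocab))   # -> MOVE_LEFT
```

In the paper's pipeline the input string would come from fusing the speech-recognition output with a textual description of the estimated gesture, and the label set would be the robot's instruction vocabulary.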
