Abstract

This paper presents a system aimed at producing natural talking-gesture behavior for a humanoid robot. To that end, human talking gestures are recorded with a human pose detector, and the captured motion data is then used to train a Generative Adversarial Network (GAN). The motion capture system properly estimates the limbs/joints involved in expressive human talking behavior without requiring any wearable devices. Tested on a Pepper robot, the developed system generates natural gestures without becoming repetitive over long talking periods. The approach is compared with a previous work in order to evaluate the improvements introduced by a computationally more demanding approach; the comparison is made by evaluating the end effectors' trajectories in terms of jerk and path length. Results show that the described system is able to learn natural gestures purely by observation.
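The comparison metrics mentioned above (jerk and path length of the end effectors' trajectories) can be computed from sampled position data. The following is a minimal sketch, assuming the trajectory is a uniformly sampled sequence of 3-D positions; the function names and the use of mean squared jerk as the aggregate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def path_length(traj):
    """Total distance travelled by an end effector.

    traj: (T, 3) array of positions sampled at a fixed rate.
    """
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

def mean_squared_jerk(traj, dt):
    """Mean squared jerk, approximating the third derivative of
    position with a third-order finite difference."""
    jerk = np.diff(traj, n=3, axis=0) / dt**3
    return float(np.mean(np.sum(jerk**2, axis=1)))

# Example: straight-line motion at constant speed has (near-)zero jerk
# and unit path length.
t = np.linspace(0.0, 1.0, 50)
traj = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
dt = float(t[1] - t[0])
print(path_length(traj))            # ≈ 1.0
print(mean_squared_jerk(traj, dt))  # ≈ 0.0
```

Lower jerk values indicate smoother, more human-like motion, which is why jerk is a common smoothness criterion when comparing generated robot trajectories.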
