Abstract
People use a wide range of non-verbal behaviors to signal their intentions in interpersonal interactions. Echoing the proven benefits of such social interaction skills in humans, considerable attention has been paid to generating non-verbal cues for social robots. In particular, communicative gestures let social robots emphasize the thoughts in their speech, describe objects, or convey feelings through bodily movement. This paper introduces a generative framework for producing communicative gestures that reinforce the semantic content social robots express. The proposed model is inspired by the Conditional Generative Adversarial Network and built upon a convolutional neural network. Experimental results confirmed that the framework can generate a variety of motions expressing a given input context. Because the synthesized actions are defined over a large number of upper-body joints, social robots can clearly express sophisticated contexts. The fully implemented model also outperforms an ablated variant without the Action Encoder and Decoder. Finally, the generated motions were retargeted to the target robot and combined with its speech, with the aim of gaining broad social acceptance.
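To make the architecture described above concrete, the sketch below shows a minimal conditional-GAN setup for gesture generation in PyTorch: a convolutional generator maps noise plus a speech-context embedding to an upper-body joint sequence, and a discriminator scores (sequence, context) pairs. This is an illustrative assumption of the general technique, not the authors' implementation; all names and sizes (CONTEXT_DIM, NOISE_DIM, N_JOINTS, SEQ_LEN, layer widths) are hypothetical.

```python
# Minimal sketch (not the paper's code): conditional-GAN-style generator and
# discriminator for upper-body gesture sequences conditioned on a speech
# context. All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

CONTEXT_DIM = 128   # assumed size of the encoded speech context
NOISE_DIM   = 64    # assumed latent noise size
N_JOINTS    = 10    # assumed number of upper-body joints
SEQ_LEN     = 32    # assumed number of frames per gesture


class Generator(nn.Module):
    """Maps (noise, context) to a joint-value sequence via 1-D convolutions."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(NOISE_DIM + CONTEXT_DIM, 256 * (SEQ_LEN // 4))
        self.net = nn.Sequential(
            nn.ConvTranspose1d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(128, N_JOINTS, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # joint values normalized to [-1, 1]
        )

    def forward(self, z, context):
        x = self.fc(torch.cat([z, context], dim=1))
        x = x.view(-1, 256, SEQ_LEN // 4)
        return self.net(x)  # (batch, N_JOINTS, SEQ_LEN)


class Discriminator(nn.Module):
    """Scores a (sequence, context) pair as real or generated."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(N_JOINTS + CONTEXT_DIM, 128, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(128, 256, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.fc = nn.Linear(256 * (SEQ_LEN // 4), 1)

    def forward(self, seq, context):
        # Tile the context along time and concatenate channel-wise,
        # so the score depends on both the motion and its conditioning.
        c = context.unsqueeze(-1).expand(-1, -1, seq.size(-1))
        h = self.conv(torch.cat([seq, c], dim=1))
        return self.fc(h.flatten(1))


# Usage: sample one gesture for a context vector (stand-in for an encoded utterance).
G = Generator()
z = torch.randn(1, NOISE_DIM)
ctx = torch.randn(1, CONTEXT_DIM)
motion = G(z, ctx)  # (1, N_JOINTS, SEQ_LEN) joint trajectory
```

In the paper's full pipeline an Action Encoder/Decoder pair would additionally compress the joint space before adversarial training; the sketch omits that stage and works directly in joint coordinates for brevity.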