Abstract
Conversational gestures play a crucial role in realizing natural interactions with virtual agents and robots. Data-driven approaches, such as deep learning and machine learning, are promising for constructing gesture generation models that automatically produce gesture motion from speech or spoken text. This study experimentally analyzes a deep learning-based model that generates gestures from spoken text using a convolutional neural network. The proposed model takes a sequence of spoken words as input and outputs a sequence of 2D joint coordinates representing the conversational gesture motion. We prepare a dataset consisting of gesture motions and spoken texts by adding text information to an existing dataset, and train the models on data from individual speakers. The quality of the generated gestures is compared with that of gestures from an existing speech-to-gesture generation model through a user perceptual study. The subjective evaluation shows that the model's performance is comparable or superior to that of the existing speech-to-gesture generation model. In addition, we investigate the importance of data cleansing and loss function selection in the text-to-gesture generation model. We further examine the model's transferability between speakers, and the experimental results demonstrate that the proposed model transfers successfully. Finally, we show that the text-to-gesture generation model can produce good-quality gestures even when using a transformer architecture.
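As a rough illustration of the text-to-gesture architecture described above, the following is a minimal sketch, assuming a PyTorch-style implementation. The vocabulary size, embedding dimension, channel widths, and joint count are illustrative placeholders, not the configuration used in the paper; the sketch only shows the general idea of a 1D convolutional network mapping a word sequence to a sequence of 2D joint coordinates.

```python
import torch
import torch.nn as nn

class TextToGestureCNN(nn.Module):
    """Illustrative text-to-gesture model: word indices -> 2D joint coordinates.

    All hyperparameters (vocab size, embedding dim, channels, number of joints)
    are placeholders, not the values used in the paper.
    """

    def __init__(self, vocab_size=10000, embed_dim=300, hidden=256, n_joints=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1D convolutions over the time (word) axis
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Project each time step to an (x, y) coordinate for every joint
        self.out = nn.Conv1d(hidden, n_joints * 2, kernel_size=1)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) integer word indices
        x = self.embed(word_ids)        # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)           # (batch, embed_dim, seq_len)
        x = self.conv(x)                # (batch, hidden, seq_len)
        y = self.out(x)                 # (batch, n_joints * 2, seq_len)
        # Reshape to a pose sequence: one (x, y) pair per joint per word
        return y.transpose(1, 2).reshape(x.size(0), -1, y.size(1) // 2, 2)


# Usage sketch: a batch of two 20-word utterances -> 20 poses each
model = TextToGestureCNN()
words = torch.randint(0, 10000, (2, 20))
poses = model(words)
print(poses.shape)  # torch.Size([2, 20, 10, 2])
```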