Abstract

In this paper, we propose a method for automatically generating dance and facial expression motion using a Hidden Markov Model (HMM). In the proposed system, an acoustic feature vector is first extracted from the music supplied by the user for each analysis interval, where one interval corresponds to one bar. Mel-Frequency Cepstrum Coefficients (MFCC) are used as the acoustic features. Similar phrases are often repeated in music, and similar dance and facial expression motions are often assigned to similar phrases; accordingly, the proposed system assigns similar dance and facial expression motions to sections whose acoustic features are similar. The basic dance motion from which similar dance motions are generated is called a dance vocabulary, and the basic facial motion from which similar facial motions are generated is called an expression vocabulary. The dance motion and the facial expression motion for each analysis interval are classified with the k-means++ method, and each vocabulary is associated with the resulting class label. Next, an HMM is used to determine a sequence of dance vocabularies from the correspondence between the acoustic features and the dance vocabularies. Finally, for each determined dance vocabulary and expression vocabulary, a corresponding motion is selected at random, and the motions within each analysis interval are interpolated, concatenated, and output. Computer experiments confirm that the proposed system can automatically generate dance and facial expression motion.

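As a rough illustration of the pipeline described above, the sketch below clusters per-bar MFCC features with k-means++ and decodes a dance-vocabulary sequence with a Viterbi search over an HMM. This is a minimal sketch, not the paper's implementation: librosa and scikit-learn are assumed for feature extraction and clustering, the fixed bar length, vocabulary counts, and uniform HMM parameters are placeholders for quantities the paper would learn from data, and all function and file names are hypothetical.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

# Illustrative sizes; the paper's actual vocabulary counts are not given here.
N_ACOUSTIC_CLASSES = 8   # hypothetical number of acoustic clusters
N_DANCE_VOCAB = 8        # hypothetical number of dance vocabularies

def mfcc_per_bar(path, bar_sec=2.0, n_mfcc=13, hop=512):
    """Mean MFCC vector for each one-bar analysis interval
    (bar length fixed here for simplicity)."""
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
    frames_per_bar = int(bar_sec * sr / hop)
    n_bars = mfcc.shape[1] // frames_per_bar
    return np.stack([mfcc[:, i * frames_per_bar:(i + 1) * frames_per_bar].mean(axis=1)
                     for i in range(n_bars)])

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state (dance-vocabulary) sequence given
    an observed acoustic-class sequence."""
    T, S = len(obs), len(log_pi)
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # scores[i, j]: state i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    states = np.empty(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                  # backtrace
        states[t] = psi[t + 1, states[t + 1]]
    return states

# Hypothetical usage: cluster per-bar MFCCs, then decode dance vocabularies.
feats = mfcc_per_bar("song.wav")
acoustic_labels = KMeans(n_clusters=N_ACOUSTIC_CLASSES, init="k-means++",
                         n_init=10, random_state=0).fit_predict(feats)
# Uniform log-probabilities stand in for the trained HMM parameters.
log_pi = np.full(N_DANCE_VOCAB, -np.log(N_DANCE_VOCAB))
log_A = np.full((N_DANCE_VOCAB, N_DANCE_VOCAB), -np.log(N_DANCE_VOCAB))
log_B = np.full((N_DANCE_VOCAB, N_ACOUSTIC_CLASSES), -np.log(N_ACOUSTIC_CLASSES))
vocab_sequence = viterbi(acoustic_labels, log_pi, log_A, log_B)
```

With uniform placeholder parameters the decode is degenerate; in a real system the initial, transition, and emission probabilities would be estimated from the training correspondence between acoustic features and dance vocabularies, as the abstract describes.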