Abstract

Labanotation is a widely used notation system for recording dance. Numerous methods for automatically generating Labanotation from motion capture data have been proposed to save time and human labor. However, variations in dance performance, data noise, and the difficulty of recognizing long continuous motion sequences limit the performance of existing methods. In this paper, we propose a CRNN-based attention-seq2seq model with fused features for robust and effective Labanotation generation. First, we fuse bone features and Lie group features to capture not only the information of bones between adjacent joints but also the relative geometric relationships between connected bones. Then, in the proposed seq2seq model, we employ a Convolutional Recurrent Neural Network (CRNN) to learn the spatial-temporal representation of motion capture data and an attention mechanism to learn good alignments between input motion feature sequences and output symbol sequences. Extensive experiments on real-world datasets show that the proposed method achieves considerable recognition accuracy (90.65% on the LabanSeq16 dataset and 93.29% on the LabanSeq48 dataset), outperforming state-of-the-art approaches on the task of automatic Labanotation generation.
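The fused feature described above can be illustrated with a minimal sketch: bone features are vectors between adjacent joints, and Lie group features express the relative rotation (an SO(3) element) between connected bones. The toy skeleton, the `relative_rotation` helper, and the Rodrigues construction below are our assumptions for illustration, not the authors' implementation:

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def norm(v): return math.sqrt(sum(x * x for x in v))
def unit(v): n = norm(v); return [x / n for x in v]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def bone_features(joints, edges):
    """Bone vectors between adjacent joints (child position minus parent position)."""
    return [sub(joints[c], joints[p]) for p, c in edges]

def relative_rotation(b1, b2):
    """3x3 rotation aligning bone b1 onto bone b2 (Rodrigues' formula): the
    SO(3) element describing the relative geometry of two connected bones."""
    u, v = unit(b1), unit(b2)
    axis = cross(u, v)
    s, c = norm(axis), dot(u, v)          # sin and cos of the angle between bones
    if s < 1e-9:                          # (anti)parallel bones: identity for simplicity
        return [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    k = [x / s for x in axis]             # unit rotation axis
    K = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    KK = [[sum(K[i][t] * K[t][j] for t in range(3)) for j in range(3)] for i in range(3)]
    # R = I + sin(theta) K + (1 - cos(theta)) K^2
    return [[I[i][j] + s * K[i][j] + (1 - c) * KK[i][j] for j in range(3)] for i in range(3)]

# Toy 3-joint chain (hip -> knee -> ankle) for one motion-capture frame
joints = [[0.0, 0.0, 1.0], [0.0, 0.0, 0.5], [0.0, 0.3, 0.2]]
edges = [(0, 1), (1, 2)]                  # (parent, child) joint index pairs
bones = bone_features(joints, edges)
R = relative_rotation(bones[0], bones[1])
# Fused per-frame feature: bone vectors concatenated with the flattened rotation
fused = [x for b in bones for x in b] + [x for row in R for x in row]
```

A full pipeline would compute such a vector for every frame and every pair of connected bones, then feed the resulting sequence into the CRNN encoder.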
