Abstract

Dynamic gesture recognition has been a very active research area in computer vision for the last few decades. Feature selection and extraction is one of the most important phases in gesture recognition, since it greatly affects recognition performance. This work focuses on discriminative video features that lead to good recognition of dance gestures, considering the full-body movement of the dancer. Since the human body is a highly articulated structure, extracting features that best describe this articulation is an important issue. In this paper, a novel full-body gesture video dataset is introduced, containing 560 video sequences of 28 ground exercises of Sattriya dance, along with annotations of those sequences and a class label for every ground exercise. The purpose of creating this dataset is to develop a computer vision system that classifies each ground exercise; the dataset can also serve as a benchmark for a variety of computer vision and machine learning methods designed for dynamic dance gesture recognition. This paper also presents a method for dynamic gesture recognition on the Sattriya dance dataset that we have developed.
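For illustration only, the sketch below shows one way such a labeled video dataset could be organized and loaded for benchmarking classifiers. The directory layout, file extension, and all names (DATASET_ROOT, load_annotations, train_test_split) are assumptions for this sketch, not the structure released by the authors.

```python
from pathlib import Path
import random

# Assumed layout: one sub-directory per ground exercise (28 classes),
# each holding the video sequences for that class (560 clips in total).
DATASET_ROOT = Path("sattriya_dance_dataset")  # hypothetical path


def load_annotations(root: Path):
    """Collect (video_path, class_label) pairs from the directory layout."""
    samples = []
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for video in sorted(class_dir.glob("*.mp4")):  # assumed video format
            samples.append((video, class_dir.name))
    return samples


def train_test_split(samples, test_ratio=0.2, seed=42):
    """Simple shuffled split for benchmarking recognition methods."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]


if __name__ == "__main__":
    samples = load_annotations(DATASET_ROOT)
    train, test = train_test_split(samples)
    print(f"{len(samples)} clips, {len({label for _, label in samples})} classes")
    print(f"train: {len(train)}  test: {len(test)}")
```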
