Abstract

Computer vision techniques can improve the understanding of classical dance gestures and also open up opportunities for automatically annotating videos. In this paper, we study Bharatanatyam, an Indian classical dance that uses hand gestures (mudras), facial expressions, and whole-body movements to communicate the intended meaning to the audience. Open datasets of Bharatanatyam hand gestures are not presently available. To address this, an exhaustive Bharatanatyam mudra dataset was created, consisting of 15,396 distinct single-hand gesture images across 29 classes and 13,035 distinct double-hand gesture images across 21 classes. This paper compares the performance of machine learning algorithms, namely Support Vector Machines, Multilayer Perceptron, Decision Tree, and Random Forest, based on SIFT and dense SIFT features extracted from the dataset images. The 128-dimensional dense SIFT descriptors were further reduced to 64 and 32 dimensions using Principal Component Analysis (PCA), and their performance was also evaluated.

Keywords: Bharatanatyam mudra dataset; Dense SIFT; Feature descriptors; Hand gestures; Principal Component Analysis; SIFT
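The PCA step described above (reducing 128-dimensional dense SIFT descriptors to 64 and 32 dimensions) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the descriptor matrix here is random synthetic data standing in for dense SIFT features, and the PCA is computed directly via SVD in NumPy.

```python
import numpy as np

# Hypothetical stand-in for dense SIFT descriptors: in the paper these are
# 128-dimensional vectors extracted from mudra images; here we use random data.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 128))  # 500 descriptors, 128-D each

def pca_reduce(X, n_components):
    """Project X onto its top n_components principal components."""
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

reduced_64 = pca_reduce(descriptors, 64)  # 128 -> 64 dimensions
reduced_32 = pca_reduce(descriptors, 32)  # 128 -> 32 dimensions
print(reduced_64.shape, reduced_32.shape)  # (500, 64) (500, 32)
```

The reduced descriptors would then be fed to the classifiers the paper evaluates (SVM, MLP, Decision Tree, Random Forest), trading some descriptor detail for lower training cost.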
