Abstract

Identifying key frames is the first and necessary step before solving the variety of other Bharatanatyam analysis problems. The paper aims to separate the momentarily stationary frames (key frames) of Bharatanatyam dance videos from their motion frames. The proposed key frame (KF) localization is novel, simple, and effective compared to existing dance video analysis methods, and it is distinct from the standard KF detection algorithms used for other human motion videos. In the dance's basic structure, the KFs occurring during a performance are often not completely stationary, and their character varies with the dance form and the performer. Hence, it is not easy to fix a global threshold (on the quantum of motion) that works across dancers and performances. Earlier approaches try to compute this threshold iteratively. The novelty of the paper lies in: (a) formulating an adaptive threshold, (b) adopting a Machine Learning (ML) approach, and (c) generating an effective feature by combining three-frame differencing with a bit-plane technique for KF detection. For ML, we use a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN) as classifiers. The proposed approaches are also compared and analyzed against the earlier approaches. The proposed ML techniques emerge as the winner with around 90% accuracy.
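The abstract refers to an adaptive threshold on the quantum of motion, but its exact formulation is not reproduced in this excerpt. As a rough, hypothetical illustration only (the mean-plus-k-standard-deviations rule and the function names below are assumptions, not the paper's formula), a per-video threshold can be derived from the statistics of frame-wise motion energy:

```python
import numpy as np

def adaptive_motion_threshold(motion_energy, k=0.5):
    """Hypothetical adaptive threshold: adapts to each dancer/performance
    by using the statistics of that video's own motion energy instead of
    a single global constant. Not the paper's exact formulation."""
    motion_energy = np.asarray(motion_energy, dtype=np.float64)
    return motion_energy.mean() + k * motion_energy.std()

def label_frames(motion_energy, k=0.5):
    """Frames whose motion falls below the adaptive threshold are treated
    as candidate key frames (KF); the rest as motion frames (MF)."""
    thr = adaptive_motion_threshold(motion_energy, k)
    return np.where(np.asarray(motion_energy) < thr, "KF", "MF")
```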

Highlights

  • Bharatanatyam is the most widely practiced and one of the oldest Indian Classical Dance (ICD) forms

  • We provide unlabeled feature sets to our trained Support Vector Machine (SVM) model, and the model predicts whether a given feature set belongs to a key frame (KF) or a motion frame (MF); see the sketch after this list

  • The proposed methods are able to extract the key frames in Bharatanatyam dance videos successfully
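A minimal sketch of the SVM step mentioned above, assuming the per-frame features are fixed-length numeric vectors (the feature dimension, RBF kernel, and scikit-learn pipeline below are assumptions, not details from the paper):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training data: per-frame feature vectors (e.g. derived from
# three-frame differencing and bit planes) with labels 1 = key frame (KF),
# 0 = motion frame (MF). Shapes and label encoding are assumptions.
X_train = np.random.rand(200, 64)
y_train = np.random.randint(0, 2, size=200)

svm_model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm_model.fit(X_train, y_train)

# Unlabeled feature sets from a new dance video: the trained model predicts
# KF vs. MF for each frame.
X_new = np.random.rand(10, 64)
print(svm_model.predict(X_new))  # 1 -> KF, 0 -> MF
```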


Summary

Introduction

Bharatanatyam is the most widely practiced and one of the oldest Indian Classical Dance (ICD) forms. Using this dance form, the dancer illustrates Hindu religious themes and spiritual ideas through elegant footwork, impressive body postures, emotional facial expressions, and hand gestures. All of these well-defined gestures, postures (key postures), movements (motions), and transitions are the units of an Adavu. The dancer follows the rhythmic beats (Tal) in the audio to perform the Adavu; each posture/motion is driven by an audio beat, as shown in the figure.

1) Three Frame Differencing and Bit-Plane Extraction

The relative change between successive frames can be detected by temporal differencing.
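A minimal Python/NumPy sketch of the two ingredients named above, three-frame differencing and bit-plane extraction, is given below. It is an assumed rendering of the idea (the paper's exact way of combining the differences and bit planes is not shown in this excerpt), and the motion_energy helper is hypothetical:

```python
import numpy as np

def three_frame_difference(prev_f, curr_f, next_f):
    """Temporal differencing over three consecutive grayscale frames:
    keep only pixels that changed with respect to both neighbours
    (taken here as the pixel-wise minimum of the two absolute differences)."""
    d1 = np.abs(curr_f.astype(np.int16) - prev_f.astype(np.int16))
    d2 = np.abs(next_f.astype(np.int16) - curr_f.astype(np.int16))
    return np.minimum(d1, d2).astype(np.uint8)

def bit_planes(frame):
    """Decompose an 8-bit grayscale image into its 8 bit planes
    (index 0 = least significant bit, 7 = most significant bit)."""
    return [((frame >> b) & 1).astype(np.uint8) for b in range(8)]

def motion_energy(prev_f, curr_f, next_f, plane=7):
    """Hypothetical per-frame motion measure: count the changed pixels that
    survive in a high-order bit plane of the three-frame difference image."""
    diff = three_frame_difference(prev_f, curr_f, next_f)
    return int(bit_planes(diff)[plane].sum())
```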
