Abstract

Along with the exponential growth of online video creation platforms such as TikTok and Instagram, state-of-the-art research on fast and effective action/gesture recognition remains crucial. This work addresses the challenge of classifying such short video clips using a domain-specific feature design approach that performs well with little training data. The method is based on the Gunnar Farnebäck dense optical flow (GF-OF) estimation strategy, Gaussian mixture models, and information divergence. We first obtain accurate 3D representations of human movements/actions by clustering the GF-OF results with the K-means method of vector quantization. We then represent one instance of each action by a Gaussian mixture model. Next, using Kullback–Leibler divergence (KL-divergence), we measure the similarity between the trained actions and those in the test videos. Classification is performed by matching each test video to the trained action with the highest similarity (lowest KL-divergence). We have performed experiments on the KTH and Weizmann Human Action datasets, and the results reveal the discriminative nature of the proposed methodology in comparison with other state-of-the-art techniques.
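The final matching step described above can be sketched in code. The example below is a minimal, self-contained illustration (not the authors' implementation): each trained action is assumed to be summarized by a diagonal-covariance Gaussian mixture, the KL-divergence between two mixtures (which has no closed form) is estimated by Monte Carlo sampling, and a test clip's mixture is assigned to the trained action with the lowest estimated divergence. The action names and mixture parameters are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a diagonal-covariance Gaussian mixture at points x (n, d)."""
    comp = []
    for w, mu, s in zip(weights, means, stds):
        ll = -0.5 * np.sum(((x - mu) / s) ** 2 + np.log(2 * np.pi * s ** 2), axis=1)
        comp.append(np.log(w) + ll)
    return np.logaddexp.reduce(np.stack(comp), axis=0)

def gmm_sample(weights, means, stds, n):
    """Draw n samples from the mixture (pick a component, then sample it)."""
    ks = rng.choice(len(weights), size=n, p=weights)
    return means[ks] + stds[ks] * rng.standard_normal((n, means.shape[1]))

def mc_kl(p, q, n=4000):
    """Monte Carlo estimate of KL(p || q); KL between GMMs has no closed form."""
    x = gmm_sample(*p, n)
    return float(np.mean(gmm_logpdf(x, *p) - gmm_logpdf(x, *q)))

# Hypothetical trained action models: (weights, means, stds) per action,
# e.g. fitted to K-means-quantized dense optical flow features.
actions = {
    "walking": (np.array([0.5, 0.5]),
                np.array([[2.0, 0.0], [1.5, 0.5]]),
                np.array([[0.3, 0.3], [0.3, 0.3]])),
    "waving":  (np.array([0.5, 0.5]),
                np.array([[0.0, 2.0], [0.5, 1.5]]),
                np.array([[0.3, 0.3], [0.3, 0.3]])),
}

def classify(test_gmm):
    """Match the test clip's GMM to the trained action with the lowest KL-divergence."""
    return min(actions, key=lambda name: mc_kl(test_gmm, actions[name]))
```

As a usage check, a test mixture whose components sit near the "walking" model's means should be assigned to "walking", since its estimated KL-divergence to that model is far smaller than to "waving".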
