Abstract

Human action recognition (HAR) has emerged as a core research domain for video understanding and analysis, attracting many researchers. Although significant results have been achieved in simple scenarios, HAR remains a challenging task due to issues associated with viewpoint variation, occlusion and inter-class variation observed in realistic scenarios. In previous research efforts, the classical bag of visual words approach and its variations have been widely used. In this paper, we propose a Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) model for human action recognition that preserves the strengths of the classical bag of visual words approach. Expressions are formed based on the density of a spatio-temporal cube around a visual word. To handle inter-class variation, we use a class-specific visual word representation for visual expression generation. In contrast to the Bag of Expressions (BoE) model, the formation of visual expressions is based on the density of spatio-temporal cubes built around each visual word, since constructing neighborhoods with a fixed number of neighbors can include non-relevant information, making a visual expression less discriminative in scenarios with occlusion and changing viewpoints. The proposed approach thus makes the model more robust to the occlusion and viewpoint challenges present in realistic scenarios. Furthermore, we train a multi-class Support Vector Machine (SVM) for classifying bags of expressions into action classes. Comprehensive experiments on four publicly available datasets, KTH, UCF Sports, UCF11 and UCF50, show that the proposed model outperforms existing state-of-the-art human action recognition methods in terms of accuracy, achieving 99.21%, 98.60%, 96.94% and 94.10%, respectively.
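As a rough illustration of the final recognition stage described above, the following Python sketch encodes each video as a histogram over a visual expression dictionary and classifies it with a multi-class SVM. The names used here (encode_histogram, expression_dictionary, train_videos) are hypothetical, and the snippet is only a minimal approximation of the described pipeline under assumed data layouts, not the authors' implementation.

```python
# Minimal sketch of the final stage: each video is encoded as a histogram over a
# visual-expression dictionary and a multi-class SVM is trained on those histograms.
# Variable and function names are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.svm import SVC

def encode_histogram(video_expressions, expression_dictionary):
    """Assign each visual expression of one video to its nearest dictionary entry
    and return an L1-normalized histogram of expression counts."""
    # Pairwise distances: (num_expressions, dictionary_size)
    dists = np.linalg.norm(
        video_expressions[:, None, :] - expression_dictionary[None, :, :], axis=2
    )
    assignments = dists.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(expression_dictionary)).astype(float)
    return hist / max(hist.sum(), 1.0)

# Training and prediction (assuming one expression matrix and one label per video):
# X_train = np.stack([encode_histogram(v, expression_dictionary) for v in train_videos])
# clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```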

Highlights

  • During the last decade, human action recognition (HAR) has become a rapidly developing research area and has received substantial attention

  • The proposed model was evaluated on four benchmark datasets for human action recognition, namely KTH, UCF Sports, UCF11 and UCF50

  • KTH [21] contains actions performed in simple scenarios and, as a consequence, it is not considered challenging by state-of-the-art human action recognition methods; it is included for comparison with related work


Summary

Introduction

Human action recognition (HAR) has become a rapidly developing research area and has received substantial attention. Using the density of a spatio-temporal cube of size n, each neighboring STIP in the ST-cube is paired with a visual word to obtain n different visual expressions. This provides a local representation of visual expressions and enables some tolerance to viewpoint variation and occlusion. In contrast to Reference [7], visual expressions are constructed to incorporate the spatio-temporal contextual information of the visual words by combining the visual words and the neighboring STIPs present in each spatio-temporal cube. These visual expression representations discard all information related to other visual words and only consider the relationship between a visual word and its neighboring STIPs in the ST-cube.
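To make the construction above concrete, here is a hedged Python/NumPy sketch of how visual expressions could be formed for one visual word: a spatio-temporal cube is centered on the word's interest point, and the word descriptor is paired with every neighboring STIP that falls inside the cube, yielding one expression per neighbor. Function and variable names are illustrative assumptions rather than the paper's code.

```python
# Illustrative sketch (not the authors' implementation) of visual expression
# formation for a single visual word and its surrounding spatio-temporal cube.
import numpy as np

def visual_expressions_for_word(word_point, word_desc, stip_points, stip_descs, cube_size):
    """word_point:  (x, y, t) location of the visual word's interest point.
    word_desc:   descriptor of the visual word.
    stip_points: (N, 3) array of STIP locations (x, y, t).
    stip_descs:  (N, D) array of the corresponding STIP descriptors.
    Returns one concatenated (word, neighbor) descriptor per STIP inside the cube."""
    half = cube_size / 2.0
    # STIPs whose spatio-temporal offset from the word lies within the cube
    inside = np.all(np.abs(stip_points - word_point) <= half, axis=1)
    # n neighbors inside the cube -> n visual expressions for this visual word
    return [np.concatenate([word_desc, neighbor_desc]) for neighbor_desc in stip_descs[inside]]
```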

Related Work
Spatio-Temporal Interest Points Detection
STIP Description using 3D SIFT
Class Specific Visual Word Dictionary
Bag of Expressions Generation
Spatio-Temporal Cube
Visual Expression Formation
Visual Expression Dictionary
Histogram of Visual Expressions Encoding
Action Recognition
Datasets
Feature Extraction and Parameter Tuning
Contribution of Each Stage
Comparison with the State-of-the-Art
Evaluation on the KTH Dataset
Evaluation on the UCF Sports Dataset
Evaluation on the UCF11 and UCF50 Datasets
Conclusions and Future Work
