Abstract

Human Activity Recognition (HAR) is the process of identifying human actions in a specific environment. Recognizing human activities from video streams is a challenging task due to problems such as background noise, partial occlusion, and changes in scale, orientation, and lighting, as well as an unstable capturing process. Such a multi-dimensional and non-linear process increases complexity, making traditional solutions inefficient in terms of several performance indicators such as accuracy, time, and memory. This paper proposes a technique for selecting a set of representative features that can accurately recognize human activities from video streams while minimizing recognition time and memory. The extracted features are projected onto a canvas, which preserves the synchronization of the spatiotemporal information. The proposed technique is designed to select only the features that reflect the progression of changes. The original RGB frames are preprocessed using background subtraction to extract the subject, and the activity pattern is then extracted through the proposed Growth method. Three experiments were conducted: the first served as a baseline, performing the classification task on the original RGB features; the second classified activities using the proposed feature-selection method; and the third provided a sensitivity analysis comparing the effect of both techniques on time and memory resources. The results indicate that the proposed method outperforms the original RGB feature selection in terms of accuracy, time, and memory requirements.
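
The preprocessing step described above (background subtraction to isolate the subject from the RGB frames) can be sketched roughly as follows. This is a minimal illustration assuming OpenCV's MOG2 subtractor as a stand-in for the paper's subtraction step; the video path and parameter values are hypothetical.

```python
import cv2

# Minimal background-subtraction sketch. OpenCV's MOG2 subtractor
# stands in for the paper's method; parameters are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(history=120, varThreshold=16)

cap = cv2.VideoCapture("activity_clip.avi")  # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # foreground mask
    subject = cv2.bitwise_and(frame, frame, mask=mask)   # keep only the subject
cap.release()
```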

Highlights

  • Video activity recognition is the process of identifying certain actions that represent an activity based on a collection of video-stream observations

  • The study results showed that better performance can be achieved when the deep-learning model discovers discriminative characteristics from the depth motion history images (MHIs) of human actions (see the MHI sketch after this list)

  • The computer vision research center at the University of Central Florida (UCF) has developed a video-based action recognition dataset (UCF-101) that consists of 13,320 short videos
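
The motion history images mentioned in the second highlight follow a standard recurrence: pixels that moved in the current frame are stamped with a duration value, and all other pixels decay toward zero, so recent motion stays bright while older motion fades. The sketch below shows that textbook update, not the paper's exact formulation; the canvas size and the synthetic masks are there only to keep the example self-contained.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30.0):
    # Standard MHI recurrence: moving pixels are set to tau,
    # everything else decays by one toward zero.
    return np.where(motion_mask, tau, np.maximum(mhi - 1.0, 0.0))

# Illustrative usage with synthetic binary motion masks.
mhi = np.zeros((240, 320), dtype=np.float32)
for _ in range(10):
    motion_mask = np.random.rand(240, 320) > 0.95
    mhi = update_mhi(mhi, motion_mask)
```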


Introduction

Video activity recognition is the process of identifying certain actions that represent an activity based on a collection of video-stream observations. To capture the basic pattern movements, we created what we call the Growth method, which keeps track of the pattern's change through consecutive frames. This novel technique models motion features explicitly while avoiding the negative effects of different body shapes, sizes, and other irrelevant aspects that might distort the motion estimate. The proposed methodology simplifies the neural-network classifiers and drives them to process only the features relevant for distinguishing among different activities. This new view of capturing motion patterns reduces the complexity of designing deep-learning architectures to learn motion features. The proposed technique is efficient: it achieves acceptable classification accuracy while minimizing time and memory requirements. This method of activity-pattern representation is a suitable alternative for representing temporal information in videos instead of motion-estimation techniques.
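
To make the idea concrete, a "growth"-style accumulation over consecutive frames could look like the sketch below. The thresholding rule, the timestamp encoding, and the names (grow_canvas, threshold) are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def grow_canvas(frames, threshold=25):
    """Project the progression of inter-frame changes onto one canvas.
    Illustrative stand-in for the paper's Growth method, not its code."""
    canvas = np.zeros(frames[0].shape, dtype=np.float32)
    for t in range(1, len(frames)):
        # Per-pixel absolute change between consecutive grayscale frames.
        diff = np.abs(frames[t].astype(np.int16) - frames[t - 1].astype(np.int16))
        moved = diff > threshold
        # Stamp the frame index so later motion overwrites earlier motion,
        # preserving the temporal order of the movement pattern.
        canvas[moved] = t
    return canvas / max(len(frames) - 1, 1)  # normalize timestamps to [0, 1]

# Synthetic grayscale frames keep the sketch self-contained.
frames = [np.random.randint(0, 256, (240, 320), dtype=np.uint8) for _ in range(8)]
pattern = grow_canvas(frames)
```

A classifier can then consume these compact canvases instead of the full RGB stream, which is where the reported savings in time and memory would come from.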

Related Work
Background
Movement Feature Selection
Datasets
Experiment Setup
Experiment Results and Discussion
Performance Indicators
Running Time and Memory
Conclusions and Future Work