Abstract

Human activity recognition in video is important for content-based video indexing, intelligent monitoring, human-machine interaction, and virtual reality. This paper adopts the low-level feature-based framework for human activity recognition, which consists of feature extraction and descriptor computation, early multi-feature fusion, video representation, and classification, and improves two of these steps: multi-feature fusion and video representation. We propose a spatio-temporal bigraph-based multi-feature fusion algorithm that captures visual information useful for recognition, and we introduce a compressed spatio-temporal video representation built on the bag-of-words (BoW) model. Experiments on two popular datasets show the effectiveness of our approach.

Highlights

  • Automatic recognition of human actions in video is a promising technology in computer vision

  • The classical BoW representation first clusters features into a visual vocabulary and encodes a video clip as a histogram of visual word occurrences

  • Our contributions are as follows: (1) We propose a bigraph-based multi-feature fusion method to model spatio-temporal cues between visual words; (2) We introduce a compressed spatio-temporal video representation for the BoW framework

Introduction

Automatic recognition of human actions in video is a promising technology in computer vision. One of the most popular frameworks for human action recognition consists of four steps: feature extraction, video representation, multi-feature fusion, and classification. We mainly focus on improving two of these steps: video representation and multi-feature fusion. The bag-of-words (BoW) model is one of the most popular methods for video representation, and much research builds on the classical BoW representation [2,3,4,5]. The classical BoW representation first clusters the extracted features into a visual vocabulary (e.g., with the k-means method) and then encodes a video clip as a histogram of visual word occurrences over that vocabulary.
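To make the classical BoW pipeline concrete, the following is a minimal Python sketch. It assumes local spatio-temporal descriptors have already been extracted per clip; the vocabulary size, k-means settings, and normalization are illustrative assumptions, and the paper's bigraph-based fusion and compressed representation are not shown here.

```python
# Minimal sketch of the classical BoW video representation (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(train_descriptors, vocab_size=1000):
    """Cluster local spatio-temporal descriptors from training clips
    into a visual vocabulary (the cluster centers are the visual words)."""
    kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
    kmeans.fit(np.vstack(train_descriptors))
    return kmeans

def encode_video(descriptors, kmeans):
    """Encode one video clip as an L1-normalized histogram of visual word occurrences."""
    words = kmeans.predict(descriptors)  # assign each descriptor to its nearest visual word
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Usage: the per-clip histograms are then fed to a classifier (e.g., an SVM).
```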
