Abstract

The objective of this review article is to survey spatio-temporal approaches that address key issues in human action recognition: multi-view variation, clutter, camera jitter, and occlusion. For static backgrounds, a sparse model built on high-level action units was developed; it exploits the fact that actions from the same class share the same units. Evaluated on several public datasets, this method achieved recognition rates of 95.49% on the KTH dataset and 89% on the UCF dataset. For multi-camera settings, a region-based negative-space approach was proposed to recognize actions captured from different viewing angles; it also handles the problem of long shadows and, on the most common datasets, attained higher precision than contemporary techniques. To capture changes caused by the action itself rather than by camera motion, an approach based on space-time quantities was proposed; it handles both clutter and camera jitter, achieving recognition rates of 93.18% on the KTH dataset and 81.5% on the UCF dataset. Finally, to handle occlusion, a model enforcing spatial and temporal consistency was presented; it was evaluated on an outdoor dataset with background clutter and on a standard indoor dataset (HumanEva-I), and its results were compared with state-of-the-art pose estimation algorithms.
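The sparse model is only summarized above. For readers unfamiliar with the underlying idea, the sketch below illustrates generic sparse-representation classification: a test sample is coded over a dictionary of class-labelled atoms, and the predicted class is the one whose atoms alone reconstruct the sample with the smallest residual. All names, dimensions, and the choice of orthogonal matching pursuit as the sparse solver are illustrative assumptions, not the reviewed paper's implementation.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)

    # Hypothetical setup: each action class contributes a few "action unit"
    # atoms to a shared dictionary (random vectors here, learned units in
    # the reviewed work).
    n_classes, atoms_per_class, feat_dim = 3, 8, 64
    D = rng.standard_normal((feat_dim, n_classes * atoms_per_class))
    D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
    labels = np.repeat(np.arange(n_classes), atoms_per_class)

    def classify(y, n_nonzero=5):
        """Sparse-representation classification: sparsely code y over D,
        then pick the class whose atoms best reconstruct y."""
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False).fit(D, y)
        code = omp.coef_
        residuals = [np.linalg.norm(y - D[:, labels == c] @ code[labels == c])
                     for c in range(n_classes)]
        return int(np.argmin(residuals))

    # Toy query: a noisy mixture of atoms belonging to class 1.
    y = D[:, labels == 1][:, :3] @ rng.standard_normal(3)
    y += 0.01 * rng.standard_normal(feat_dim)
    print("predicted class:", classify(y))

In the reviewed work the dictionary atoms would correspond to learned high-level action units extracted from video features, so clips of the same action class select overlapping sets of atoms, which is what makes the class-wise residual a useful decision rule.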
