Abstract

The objective of this paper is to propose a new approach for video-based human action and activity recognition using an effective feature extraction and classification methodology. Initially, the video sequence containing human action and activity is segmented to extract the human silhouette. The extraction of the human silhouette is done using a texture-based segmentation approach, and subsequently, the average energy images (AEI) of human activities are formed. To represent these images, shape-based spatial distribution of gradients and view-independent features are computed. The robustness of the spatial distribution of gradients feature is strengthened by incorporating additional features at various views and scales, computed using Gabor wavelets. Finally, these features are fused to form a robust descriptor. The performance of the descriptor is evaluated on publicly available datasets. The highest recognition accuracy, achieved using an SVM classifier, is compared with similar state-of-the-art methods and demonstrates superior performance.
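The sketch below illustrates the described pipeline (silhouettes → AEI → gradient-based and Gabor-wavelet features → fusion → SVM) under stated assumptions. It is not the authors' implementation: function names, cell/scale/orientation parameters, and the use of OpenCV and scikit-learn are illustrative choices, and the simple cell-wise gradient histogram stands in for the paper's shape-based spatial distribution of gradients.

```python
# Minimal sketch of the pipeline described in the abstract (assumptions noted above).
import numpy as np
import cv2
from sklearn.svm import SVC

def average_energy_image(silhouettes):
    """Average a stack of binary silhouette frames into one AEI."""
    stack = np.stack([s.astype(np.float32) for s in silhouettes], axis=0)
    return stack.mean(axis=0)

def gradient_descriptor(aei, cells=(8, 8), bins=9):
    """Cell-wise gradient-orientation histograms over the AEI (shape cue)."""
    gx = cv2.Sobel(aei, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(aei, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    h, w = aei.shape
    ch, cw = h // cells[0], w // cells[1]
    feats = []
    for i in range(cells[0]):
        for j in range(cells[1]):
            m = mag[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            a = ang[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            hist, _ = np.histogram(a, bins=bins, range=(0, 360), weights=m)
            feats.append(hist)
    f = np.concatenate(feats).astype(np.float32)
    return f / (np.linalg.norm(f) + 1e-6)

def gabor_descriptor(aei, scales=(7, 11, 15), orientations=4):
    """Gabor-wavelet responses at several scales/orientations (view and scale cue)."""
    feats = []
    for ksize in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(aei, cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])
    f = np.asarray(feats, dtype=np.float32)
    return f / (np.linalg.norm(f) + 1e-6)

def fused_feature(silhouettes):
    """Fuse gradient and Gabor descriptors of one action sequence into one vector."""
    aei = average_energy_image(silhouettes)
    return np.concatenate([gradient_descriptor(aei), gabor_descriptor(aei)])

# Usage sketch: X = [fused_feature(seq) for seq in sequences]; then
# SVC(kernel='rbf', C=10.0).fit(X, labels) to train the action classifier.
```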
