Abstract

This paper presents a novel multi-view human action recognition framework based on the Self-Similarity Matrix (SSM) and view- and rotation-invariant features of human actions. We compute the self-similarity between frames of an input video to extract only the most informative frames; this self-similarity evaluation reduces the total number of training frames required for each view. To extract scale- and rotation-invariant features, we then apply a Gabor filter bank with varying scales and orientations. These features are learned with a Support Vector Machine (SVM) to recognize human actions under multiple views. Experiments on the IXMAS multi-view dataset show that the proposed approach outperforms state-of-the-art methods.
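
The sketch below illustrates the pipeline described above, assuming grayscale frame sequences as NumPy arrays. The frame-selection ratio, Gabor-bank parameters, and SVM kernel are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC


def self_similarity_matrix(frames):
    """Pairwise Euclidean distances between flattened frames (the SSM)."""
    flat = np.stack([f.ravel().astype(np.float32) for f in frames])
    return cdist(flat, flat, metric="euclidean")


def select_key_frames(frames, keep_ratio=0.3):
    """Keep the frames that differ most from their successor in the SSM."""
    ssm = self_similarity_matrix(frames)
    novelty = np.diag(ssm, k=1)                 # distance of each frame to the next
    n_keep = max(1, int(keep_ratio * len(frames)))
    idx = np.sort(np.argsort(novelty)[-n_keep:])
    return [frames[i] for i in idx]


def gabor_features(frame, scales=(7, 11, 15), orientations=8):
    """Responses of a Gabor filter bank over several scales and rotations."""
    feats = []
    for ksize in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                        theta=theta, lambd=ksize / 2.0,
                                        gamma=0.5, psi=0.0)
            resp = cv2.filter2D(frame.astype(np.float32), cv2.CV_32F, kernel)
            feats.extend([resp.mean(), resp.std()])  # compact per-filter statistics
    return np.array(feats, dtype=np.float32)


def video_descriptor(frames):
    """Average Gabor feature vector over the SSM-selected key frames."""
    key = select_key_frames(frames)
    return np.mean([gabor_features(f) for f in key], axis=0)


# Training (hypothetical data): descriptors from multi-view clips and action labels.
# X = np.stack([video_descriptor(clip) for clip in training_clips])
# clf = SVC(kernel="rbf").fit(X, labels)
```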
