Abstract
In the world of automated machinery, human-computer interaction plays a major role, and human actions or activities are key components for better performance. Hence, activity recognition has become an active research area in which many algorithms and methods have evolved rapidly. Nevertheless, challenges remain, such as view occlusions, clothing variation, and the speed of action sequences. This paper proposes a novel classifier with skeleton features to recognize human activities. The work integrates fuzzy logic with a Dragon Deep Belief Network in order to improve accuracy on complex activities. From the given input videos, key frames are first selected using the Structural Similarity Measure (SSIM) so that sufficient features can be extracted for classification; the Scale Invariant Feature Transform (SIFT) is then applied to obtain scale-invariant spatial features, and spatio-temporal interest points are extracted to retain temporal features. Finally, the combined spatial and temporal features are used for training and testing the classifier. For implementation, input videos are taken from two common datasets, namely KTH and Weizmann. Detailed performance analyses were carried out for actions such as walking, running, and boxing. The proposed work improves accuracy, with values reaching up to 1.
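To make the front end of the described pipeline concrete, the sketch below illustrates SSIM-based key-frame selection followed by SIFT descriptor extraction. It is not the authors' implementation: the use of OpenCV and scikit-image, the SSIM threshold of 0.8, and the example video filename are all assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's code): SSIM-based key-frame selection
# followed by SIFT feature extraction, assuming OpenCV and scikit-image.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def select_key_frames(video_path, ssim_threshold=0.8):
    """Keep a frame only if it differs enough (low SSIM) from the last key frame.
    The 0.8 threshold is an assumed value, not taken from the paper."""
    cap = cv2.VideoCapture(video_path)
    key_frames, last_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if last_gray is None or ssim(last_gray, gray) < ssim_threshold:
            key_frames.append(gray)
            last_gray = gray
    cap.release()
    return key_frames

def sift_descriptors(frames):
    """Stack SIFT descriptors from all key frames into one feature matrix."""
    sift = cv2.SIFT_create()
    descs = []
    for f in frames:
        _, d = sift.detectAndCompute(f, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs) if descs else np.empty((0, 128))

# Hypothetical usage on a KTH-style clip:
# features = sift_descriptors(select_key_frames("person01_walking.avi"))
```

Spatio-temporal interest points and the fuzzy Dragon Deep Belief Network classifier described in the abstract would consume such features downstream; they are omitted here because the abstract does not specify their implementation details.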