Abstract

In many real-world applications such as video surveillance systems, human activities are captured and retained as multimodal information for authorizing permitted actions. However, the accuracy of recognizing such actions depends heavily on many factors, including occlusion, illumination, and cluttered environments. In this work we propose the correlation of temporal difference frame (CTDF) algorithm, which captures the local maxima of every small movement together with its neighboring information. The temporal difference computed between frames, the block size defined to capture surrounding information, and finally the one-to-all comparison of points between identified frames together greatly increase accuracy. The algorithm takes raw video input from the standard UT-Interaction and BIT-Interaction datasets. Features extracted using the proposed algorithm are passed through variants of SVM, yielding state-of-the-art results: 95.83% accuracy on the UT-Interaction dataset and 90.4% on the BIT-Interaction dataset.
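The abstract names three ingredients: temporal differencing between frames, block-wise extraction of the neighborhood around the strongest motion points, and a one-to-all correlation of points between frames. The sketch below illustrates those ideas only; the block size, number of maxima, correlation measure, and the synthetic frames are illustrative assumptions and not the paper's exact CTDF formulation.

```python
# Minimal sketch of the ideas named in the abstract: temporal frame differencing,
# block-wise neighborhoods around motion maxima, and one-to-all patch correlation.
# Block size, top_k, and the Pearson-correlation score are assumed choices.
import numpy as np


def temporal_difference(frame_prev: np.ndarray, frame_curr: np.ndarray) -> np.ndarray:
    """Absolute intensity difference between consecutive grayscale frames."""
    return np.abs(frame_curr.astype(np.float32) - frame_prev.astype(np.float32))


def local_maxima_blocks(diff: np.ndarray, block: int = 8, top_k: int = 16):
    """Pick the strongest motion points and return the block around each one."""
    h, w = diff.shape
    flat_idx = np.argsort(diff, axis=None)[-top_k:]  # indices of the largest differences
    points, patches = [], []
    half = block // 2
    for idx in flat_idx:
        y, x = np.unravel_index(idx, diff.shape)
        y0, y1 = max(0, y - half), min(h, y + half)
        x0, x1 = max(0, x - half), min(w, x + half)
        points.append((y, x))
        patches.append(diff[y0:y1, x0:x1])
    return points, patches


def one_to_all_correlation(patches_a, patches_b) -> np.ndarray:
    """Correlate every patch of one frame against every patch of another."""
    def score(p, q):
        rows, cols = min(p.shape[0], q.shape[0]), min(p.shape[1], q.shape[1])
        p, q = p[:rows, :cols].ravel(), q[:rows, :cols].ravel()
        if p.std() == 0 or q.std() == 0:
            return 0.0
        return float(np.corrcoef(p, q)[0, 1])
    return np.array([[score(p, q) for q in patches_b] for p in patches_a])


# Example with synthetic frames standing in for raw video input.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 255, (120, 160)).astype(np.uint8)
f1 = np.clip(f0.astype(np.int16) + 0, 0, 255)
f1[40:60, 70:90] = np.clip(f1[40:60, 70:90] + 40, 0, 255)  # simulated small movement
f1 = f1.astype(np.uint8)

d = temporal_difference(f0, f1)
pts, blocks = local_maxima_blocks(d)
corr = one_to_all_correlation(blocks, blocks)  # feature matrix for a downstream classifier (e.g. SVM)
print(corr.shape)
```

In this reading, the flattened correlation matrix would serve as the feature vector handed to the SVM variants mentioned in the abstract; the actual feature construction in the paper may differ.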
