Abstract
Most existing action recognition methods represent actions as bags of space-time interest points. Specifically, space-time interest points are detected in the video and described using appearance-based descriptors. Each descriptor is then quantised to a video-word, and a histogram of these video-words is used for recognition. Such methods therefore rely solely on the discriminative power of individual local space-time descriptors, whilst ignoring the potentially useful information in the global spatio-temporal distribution of interest points. In this paper we propose a novel action representation that differs significantly from existing interest-point-based representations in that only the global distribution information of interest points is exploited. In particular, holistic features are extracted from clouds of interest points accumulated over multiple temporal scales. Since the proposed spatio-temporal distribution representation contains different but complementary information to the conventional Bag-of-Words representation, we formulate a feature fusion method based on Multiple Kernel Learning. Experiments on the KTH and Weizmann datasets demonstrate that our approach outperforms most existing methods, in particular under occlusion and changes in view angle, clothing, and carrying condition.
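The sketch below is not the authors' implementation; it is a minimal illustration, under assumed feature and kernel choices, of the two complementary representations the abstract describes (a Bag-of-Words histogram of quantised interest-point descriptors, and a holistic feature summarising the spatio-temporal distribution of interest-point clouds over multiple temporal scales) together with a fixed-weight kernel combination standing in for learned Multiple Kernel Learning weights. All function names, the RBF kernel, and the particular cloud statistics are assumptions for illustration only.

```python
# Illustrative sketch only -- not the paper's method. Plain NumPy, no learning of
# the MKL weights; a convex combination with a hand-set weight is used instead.
import numpy as np


def bow_histogram(descriptors, codebook):
    """Quantise local space-time descriptors against a codebook and return a
    normalised video-word histogram (the conventional Bag-of-Words representation)."""
    # Assign each descriptor to its nearest codeword (video-word).
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / (hist.sum() + 1e-12)


def cloud_distribution_feature(points, temporal_scales=(0.25, 0.5, 1.0)):
    """Holistic feature from clouds of interest points accumulated over multiple
    temporal scales. `points` is an (N, 3) array of (x, y, t) interest-point
    locations; for each scale the points in the trailing fraction of the clip are
    kept and their spatial spread is summarised (a simplistic stand-in for the
    paper's actual cloud features)."""
    t_max = points[:, 2].max() + 1e-12
    feats = []
    for scale in temporal_scales:
        cloud = points[points[:, 2] >= (1.0 - scale) * t_max]
        if len(cloud) == 0:
            feats.extend([0.0] * 4)
            continue
        xy = cloud[:, :2]
        feats.extend([xy[:, 0].mean(), xy[:, 1].mean(), xy[:, 0].std(), xy[:, 1].std()])
    return np.asarray(feats)


def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between row-wise feature sets A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)


def fused_kernel(K_bow, K_cloud, beta=0.5):
    """Convex combination of the two base kernels; Multiple Kernel Learning would
    learn `beta` from training data rather than fixing it."""
    return beta * K_bow + (1.0 - beta) * K_cloud
```

The fused kernel can then be passed to any kernel classifier (e.g. an SVM with a precomputed kernel); the point of the sketch is simply that the distribution-based and Bag-of-Words representations enter as separate kernels whose combination weights MKL would optimise.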