Abstract

Action recognition has been extensively researched in computer vision because of its potential applications in a broad range of areas. The key to action recognition lies in modeling actions and measuring their similarity, both of which pose great challenges. In this paper, we propose learning match kernels between actions on the Grassmann manifold for action recognition. Specifically, we model an action as a linear subspace on the Grassmann manifold, spanned by convolutional neural network (CNN) feature vectors pooled temporally over frames in semantic video clips; this representation simultaneously captures local discriminative patterns and the temporal dynamics of motion. To measure the similarity between actions, we propose Grassmann match kernels (GMK), based on the canonical correlations of linear subspaces, to directly match videos for action recognition. GMK is learned in a supervised way via kernel target alignment, which endows it with strong discriminative ability to distinguish actions of different classes. The proposed approach leverages the strengths of CNNs for feature extraction and of kernels for measuring similarity, yielding a general learning framework of match kernels for action recognition. We have conducted extensive experiments on five challenging realistic data sets: YouTube, UCF50, UCF101, Penn Action, and HMDB51. The proposed approach achieves high performance and surpasses state-of-the-art algorithms by large margins, demonstrating its great effectiveness for action recognition.
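The core ideas in the abstract — representing a video clip as a linear subspace of CNN features and comparing subspaces via canonical correlations — can be illustrated with a minimal sketch. This is not the paper's implementation: the subspace dimension, the use of a plain QR orthonormalization, and the projection-style kernel (sum of squared canonical correlations, obtained as singular values of the product of the two bases) are illustrative assumptions; the learned, supervised GMK of the paper additionally fits kernel weights by kernel target alignment, which is omitted here.

```python
import numpy as np

def subspace_basis(features, dim=10):
    """Map a clip's frame features (n_frames x d) to a point on the
    Grassmann manifold: an orthonormal basis (d x dim) of their span.
    `dim` is an illustrative choice, not the paper's setting."""
    q, _ = np.linalg.qr(features.T)   # orthonormalize the feature span
    return q[:, :dim]

def grassmann_projection_kernel(Y1, Y2):
    """Similarity of two subspaces via canonical correlations:
    the singular values of Y1^T Y2 are the cosines of the principal
    angles; summing their squares gives a projection-type kernel."""
    s = np.linalg.svd(Y1.T @ Y2, compute_uv=False)
    return float(np.sum(s ** 2))
```

For identical subspaces all canonical correlations equal 1, so the kernel value equals the subspace dimension; for unrelated subspaces it falls toward 0, which is what makes it usable as a similarity measure between videos.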
