Abstract
This study proposes a novel human action recognition method based on regularized multi-task learning. First, we propose the part Bag-of-Words (PBoW) representation, which fully captures the local visual characteristics of each part of the human body; each body part can then be viewed as a single task in a multi-task learning formulation. We further formulate multi-view human action recognition as a learning problem penalized by a graph structure built according to the human body structure. Our experiments show that this method significantly outperforms the standard Bag-of-Words + Support Vector Machine (BoW+SVM) baseline as well as other state-of-the-art methods. Moreover, even with a simple global representation, the proposed method matches state-of-the-art performance on the TJU dataset, a new multi-view action dataset with RGB, depth, and skeleton data created by our group.
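The core idea of the abstract can be illustrated with a minimal sketch: each body part is a task with its own weight vector, and a graph penalty pulls the weights of connected parts toward each other. The function below is an illustrative graph-regularized multi-task least-squares solver, not the authors' actual formulation (their loss, features, and graph are defined in the paper body); all names and parameters here are hypothetical.

```python
import numpy as np

def graph_mtl(Xs, ys, edges, lam=1.0, lr=0.01, iters=500):
    """Illustrative graph-regularized multi-task least squares.

    Each task t (e.g. one body part's PBoW features) has data
    (Xs[t], ys[t]) and its own weight vector W[t]. `edges` encodes
    the body-structure graph: connected tasks i, j are penalized
    via lam * ||W[i] - W[j]||^2, encouraging similar models for
    adjacent body parts.
    """
    T = len(Xs)
    d = Xs[0].shape[1]
    W = np.zeros((T, d))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(T):
            # gradient of the per-task squared loss
            G[t] = Xs[t].T @ (Xs[t] @ W[t] - ys[t]) / len(ys[t])
        for i, j in edges:
            # graph penalty pulls connected tasks together
            diff = W[i] - W[j]
            G[i] += 2 * lam * diff
            G[j] -= 2 * lam * diff
        W -= lr * G
    return W
```

With `lam = 0`, each task is fit independently; as `lam` grows, the solutions of graph-connected tasks are smoothed toward one another, which is the effect the body-structure penalty is meant to achieve.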