Abstract

With the proliferation of low-cost, easy-to-operate depth cameras, skeleton-based human action recognition has been studied extensively in recent years. However, most existing methods implicitly treat all 3D joints of a human skeleton as equally important. In reality, these 3D joints exhibit diverse responses to different action classes, and certain joint configurations are more discriminative for distinguishing a given action. In this paper, we propose a discriminative multi-instance multitask learning (MIMTL) framework to discover the intrinsic relationship between joint configurations and action classes. First, a set of discriminative and informative joint configurations for each action class is captured by a multi-instance learning model, which regards the action and its joint configurations as a bag and its instances, respectively. Then, a multitask learning model with group structure constraints is exploited to further reveal the intrinsic relationship between the joint configurations and the different action classes. We conduct extensive evaluations of MIMTL on three benchmark 3D action recognition datasets. Experimental results show that the proposed MIMTL framework performs favorably against several state-of-the-art approaches.
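The bag/instance formulation mentioned above can be illustrated with a minimal sketch. All names, shapes, and the max-pooling scoring rule below are illustrative assumptions, not details taken from the paper: an action sample is a "bag" of joint-configuration feature vectors ("instances"), and in classic multi-instance learning the bag's class response is driven by its most responsive instance.

```python
import numpy as np

# Hypothetical illustration of the bag/instance view (shapes and the
# scoring rule are assumptions, not the paper's actual model).
rng = np.random.default_rng(0)

n_configs = 5   # candidate joint configurations per action sample
feat_dim = 8    # feature dimension of each configuration

# One bag = one action sample: a set of instance feature vectors.
bag = rng.standard_normal((n_configs, feat_dim))

# A linear instance-level scorer for one action class
# (in practice these weights would be learned per class).
w = rng.standard_normal(feat_dim)
instance_scores = bag @ w

# MIL intuition: the bag's response to a class is dominated by its
# most discriminative instance (here, a simple max over instances).
bag_score = instance_scores.max()
best_config = int(instance_scores.argmax())
```

This max-over-instances rule is only one common MIL aggregation choice; the paper's actual model additionally couples the per-class tasks through group structure constraints, which this sketch does not show.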


