Abstract

The ability to capture joint connections in complex motion is essential for skeleton-based action recognition. However, earlier approaches may fail to fully explore these connections in either the spatial or the temporal dimension because of fixed or single-level topological structures and insufficient temporal modeling. In this paper, we propose a novel multilevel spatial-temporal excited graph network (ML-STGNet) to address the above problems. In the spatial configuration, we decouple the learning of the human skeleton into general and individual graphs by designing a multilevel graph convolution (ML-GCN) network and a spatial data-driven excitation (SDE) module, respectively. ML-GCN leverages joint-level, part-level, and body-level graphs to comprehensively model the hierarchical relations of the human body. Based on this, SDE is further introduced to handle the diverse joint relations of different samples in a data-dependent way. This decoupling approach not only increases the flexibility of the model for graph construction but also gives it the generality to adapt to diverse data samples. In the temporal configuration, we apply the concept of temporal difference to the human skeleton and design an efficient temporal motion excitation (TME) module to highlight motion-sensitive features. Furthermore, a simplified multiscale temporal convolution (MS-TCN) network is introduced to enrich the expressiveness of temporal features. Extensive experiments on four popular datasets, NTU-RGB+D, NTU-RGB+D 120, Kinetics Skeleton 400, and Toyota Smarthome, demonstrate that ML-STGNet achieves considerable improvements over the existing state of the art.
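To make the temporal-difference idea behind TME concrete, the sketch below shows one common way such an excitation module can be realized in PyTorch: frame-to-frame differences of channel-reduced features are mapped to a sigmoid attention map that gates the input, amplifying motion-sensitive channels. This is a minimal illustration under assumed details, not the authors' implementation; the class name, the reduction ratio, the zero-padding of the last frame, and the residual gating form are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalMotionExcitation(nn.Module):
    """Illustrative TME-style module (hypothetical implementation).

    Skeleton features have shape (N, C, T, V): batch, channels,
    frames, joints. Temporal differences of the features are used
    to excite motion-sensitive channels.
    """

    def __init__(self, channels: int, reduction: int = 4):  # reduction ratio is an assumption
        super().__init__()
        self.reduce = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.expand = nn.Conv2d(channels // reduction, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.reduce(x)                        # (N, C/r, T, V)
        diff = f[:, :, 1:] - f[:, :, :-1]         # temporal difference between adjacent frames
        diff = F.pad(diff, (0, 0, 0, 1))          # zero-pad last frame to restore length T (assumed choice)
        att = torch.sigmoid(self.expand(diff))    # per-channel motion attention in (0, 1)
        return x + x * att                        # residual excitation of motion-sensitive features

# Usage: 2 clips, 64 channels, 32 frames, 25 joints
x = torch.randn(2, 64, 32, 25)
tme = TemporalMotionExcitation(64)
print(tme(x).shape)  # torch.Size([2, 64, 32, 25])

The residual form keeps the original features intact while boosting channels whose temporal differences are large, which matches the abstract's stated goal of highlighting motion-sensitive features rather than replacing the static representation.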
