Abstract

In skeleton-based action recognition, graph convolutional networks (GCNs) have achieved great success. Modelling skeleton data in a suitable spatial-temporal way and designing the adjacency matrix are crucial for GCN-based methods to capture joint relationships. In this study, we propose the spatial-temporal slowfast graph convolutional network (STSF-GCN) and design the adjacency matrices for the skeleton data graphs in STSF-GCN. STSF-GCN contains two pathways: (1) the fast pathway operates at a high frame rate, and joints of adjacent frames are unified to build ‘small’ spatial-temporal graphs; a new spatial-temporal adjacency matrix is proposed for these ‘small’ graphs, and ablation studies verify its effectiveness. (2) The slow pathway operates at a low frame rate, and joints from all frames are unified to build one ‘big’ spatial-temporal graph, whose adjacency matrix is obtained by computing self-attention coefficients for each joint. Finally, the outputs of the two pathways are fused to predict the action category. STSF-GCN efficiently captures both long-range and short-range spatial-temporal joint relationships, and achieves state-of-the-art performance on three skeleton-based action recognition datasets at much lower computational cost.
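The fast pathway's ‘small’ spatial-temporal graphs unify the joints of adjacent frames into a single graph with one shared adjacency matrix. The sketch below shows one plausible construction, assuming each joint keeps its spatial edges within a frame and is additionally linked to the same joint in the adjacent frame; the function name `st_adjacency`, the three-joint chain skeleton, and the same-joint temporal links are illustrative assumptions, and the paper's actual adjacency matrix may differ.

```python
def st_adjacency(spatial_adj, num_frames=2):
    """Build an adjacency matrix for num_frames copies of one skeleton.

    spatial_adj: V x V list-of-lists adjacency of a single frame's
    skeleton (self-loops included here). Returns a (num_frames*V) square
    matrix in which same-index joints of consecutive frames are linked.
    """
    v = len(spatial_adj)
    n = num_frames * v
    a = [[0] * n for _ in range(n)]
    for t in range(num_frames):
        off = t * v
        # Spatial edges: copy the per-frame skeleton adjacency into
        # the diagonal block for frame t.
        for i in range(v):
            for j in range(v):
                a[off + i][off + j] = spatial_adj[i][j]
        # Temporal edges: connect each joint to the same joint in the
        # next frame (symmetric links, an assumed design choice).
        if t + 1 < num_frames:
            for i in range(v):
                a[off + i][off + v + i] = 1
                a[off + v + i][off + i] = 1
    return a

# Example: a 3-joint chain skeleton (0-1-2) with self-loops, two frames.
chain = [[1, 1, 0],
         [1, 1, 1],
         [0, 1, 1]]
A = st_adjacency(chain, num_frames=2)  # 6x6 spatial-temporal adjacency
```

A graph convolution applied with `A` then mixes information across both space and time in a single step, which is how the ‘small’ graphs let the fast pathway capture short-range spatial-temporal joint relationships.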
