Owing to their superior ability to model graph topology, graph convolutional networks are gaining popularity in skeleton-based action recognition. However, it remains difficult to extract features with strong discriminative information in both the spatial and temporal dimensions. This work proposes a novel multidimensional adaptive dynamic temporal graph convolutional network (MADT-GCN) for skeleton-based action recognition. It consists of two modules: a multidimensional adaptive graph convolutional network (MD-AGCN) module and a dynamic temporal convolutional network (DY-TCN) module. First, MD-AGCN adaptively adjusts the graph topology according to the layer and to the multidimensional spatial, temporal, and channel information contained in different action samples, so as to capture the complex relationships between every pair of joints. Then, DY-TCN is proposed to boost the representational capability for capturing expressive temporal features. Moreover, joint and bone information, together with their motion information, are modeled simultaneously in a multi-stream framework, which yields notable improvements in recognition accuracy. Finally, extensive experiments on two standard datasets, NTU-RGB+D and NTU-RGB+D 120, demonstrate the effectiveness of the proposed method.
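The abstract gives no equations, but an adaptive spatial graph convolution of the kind described is commonly formed by summing a fixed skeleton adjacency with learned and data-dependent terms. The sketch below is a minimal illustration under that assumption; all names and the exact combination rule are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_graph_conv(X, A_skeleton, B_learned, W, theta, phi):
    """One adaptive spatial graph convolution over skeleton joints.

    X          : (N, C) features for N joints with C channels.
    A_skeleton : (N, N) fixed adjacency from the physical skeleton.
    B_learned  : (N, N) fully learned offset, shared across samples.
    W          : (C, C_out) output projection.
    theta, phi : (C, E) embeddings used to build a sample-dependent
                 adjacency from joint-feature similarity.
    All parameter names here are assumptions for illustration only.
    """
    # Sample-dependent adjacency: normalized similarity of embedded joints.
    C_adj = softmax((X @ theta) @ (X @ phi).T, axis=-1)   # (N, N)
    # Adaptive topology: fixed + learned + data-dependent terms.
    A_eff = A_skeleton + B_learned + C_adj
    # Aggregate neighbor features, then project channels.
    return A_eff @ X @ W                                   # (N, C_out)
```

Because `C_adj` is recomputed from each sample's features, the effective topology differs per sample and per layer, which is the general mechanism an adaptive GCN uses to go beyond the fixed skeleton graph.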