Human action recognition is widely used in fields such as human–computer interaction and virtual reality. Despite significant progress, existing approaches still struggle to integrate hierarchical information effectively and to process sequences beyond a certain frame count. To address these challenges, we introduce the Multi-AxisFormer (MAFormer) model, which organizes the action sequence along its spatial, temporal, and channel dimensions, enhancing the model’s ability to capture correlations and intricate structures among and within features. Building on the Transformer architecture, we propose the Cross-channel Spatio-temporal Aggregation (CSA) structure for more refined feature extraction and the Multi-Axis Attention (MAA) module for more comprehensive feature aggregation. Moreover, the integration of Rotary Position Embedding (RoPE) improves the model’s extrapolation and generalization abilities. MAFormer surpasses known state-of-the-art methods on multiple skeleton-based action recognition benchmarks, with accuracies of 93.2% on the NTU RGB+D 60 cross-subject split, 89.9% on the NTU RGB+D 120 cross-subject split, and 97.2% on N-UCLA, offering a novel paradigm for hierarchical modeling in human action recognition.
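To make the multi-axis idea concrete, the sketch below shows one plausible way to apply self-attention along the temporal and spatial axes of a skeleton tensor of shape (batch, frames, joints, channels), with an MLP standing in for cross-channel aggregation. This is a minimal illustration under our own assumptions, not the authors' MAA/CSA implementation; names such as `axis_attention` and `MultiAxisBlock` are hypothetical, and RoPE is omitted for brevity.

```python
import torch
import torch.nn as nn


def axis_attention(x: torch.Tensor, attn: nn.MultiheadAttention, axis: int) -> torch.Tensor:
    """Apply self-attention along one axis of a (B, T, V, C) skeleton tensor."""
    B, T, V, C = x.shape
    if axis == 1:        # temporal axis: attend across frames for each joint
        seq = x.permute(0, 2, 1, 3).reshape(B * V, T, C)
    elif axis == 2:      # spatial axis: attend across joints within each frame
        seq = x.reshape(B * T, V, C)
    else:
        raise ValueError("axis must be 1 (temporal) or 2 (spatial)")
    out, _ = attn(seq, seq, seq)
    if axis == 1:
        out = out.reshape(B, V, T, C).permute(0, 2, 1, 3)
    else:
        out = out.reshape(B, T, V, C)
    return out


class MultiAxisBlock(nn.Module):
    """Illustrative block: temporal and spatial attention plus channel mixing."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels * 2), nn.GELU(), nn.Linear(channels * 2, channels)
        )
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + axis_attention(self.norm(x), self.temporal, axis=1)  # temporal axis
        x = x + axis_attention(self.norm(x), self.spatial, axis=2)   # spatial axis
        x = x + self.channel_mlp(self.norm(x))                       # channel axis (stand-in)
        return x


if __name__ == "__main__":
    block = MultiAxisBlock(channels=64)
    clip = torch.randn(2, 30, 25, 64)   # 2 clips, 30 frames, 25 joints, 64 channels
    print(block(clip).shape)            # torch.Size([2, 30, 25, 64])
```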