Skeleton-based action recognition is a core task in video understanding. Skeleton sequences are characterized by high information density, low redundancy, and clear structural information, making it easier to analyze complex relationships in human behavior than with other modalities. Although existing studies have encoded skeleton data and achieved positive outcomes, they have often overlooked the precise high-level semantic information inherent in action descriptions. To address this issue, this paper proposes a prompt-supervised dynamic attention graph convolutional network (PDA-GCN). Specifically, the PDA-GCN incorporates a prompt supervision (PS) module that leverages a pre-trained large language model (LLM) as a knowledge engine and retains the generated text features as prompts to provide additional supervision during model training, enhancing the model's ability to discern analogous actions at negligible computational cost. In addition, to bolster the learning of discriminative features, a dynamic attention graph convolution (DA-GC) module is presented. This module utilizes a self-attention mechanism to adaptively infer intrinsic relationships between joints and integrates dynamic convolution to strengthen the emphasis on local information. This dual focus on global context and local details further improves the efficiency and effectiveness of the model. Extensive experiments on the widely used skeleton-based action recognition datasets NTU RGB+D 60 and NTU RGB+D 120 demonstrate that the PDA-GCN surpasses known state-of-the-art methods, achieving accuracies of 93.4% on the NTU RGB+D 60 cross-subject split and 90.7% on the NTU RGB+D 120 cross-subject split.
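To make the DA-GC idea concrete, the following is a minimal PyTorch sketch of a graph convolution whose joint-to-joint adjacency is inferred by self-attention over joint features and combined with a learnable skeleton prior. All layer names and shapes here are illustrative assumptions, not the paper's exact formulation; in particular, the dynamic-convolution branch for local detail and the prompt-supervision loss are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicAttentionGraphConv(nn.Module):
    """Illustrative sketch: self-attention infers a sample-specific adjacency
    between joints, which is added to a learnable static skeleton adjacency."""

    def __init__(self, in_channels, out_channels, num_joints):
        super().__init__()
        self.q = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.k = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.v = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # learnable static adjacency shared across samples (skeleton prior)
        self.static_adj = nn.Parameter(torch.eye(num_joints))

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        # pool over time to obtain per-joint descriptors for attention
        q = self.q(x).mean(dim=2)               # (N, C', V)
        k = self.k(x).mean(dim=2)               # (N, C', V)
        attn = torch.einsum('ncv,ncw->nvw', q, k) / (q.shape[1] ** 0.5)
        dyn_adj = F.softmax(attn, dim=-1)        # sample-specific joint relations
        adj = dyn_adj + self.static_adj          # global context + skeleton prior
        v = self.v(x)                            # (N, C', T, V)
        return torch.einsum('nctv,nvw->nctw', v, adj)
```

Under this reading, the PS module would add an auxiliary alignment term between the pooled skeleton features and frozen LLM text embeddings of the class descriptions during training only, so inference cost is unchanged.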