Abstract

Skeleton-based action recognition is a core task in video understanding. Skeleton sequences offer high information density, low redundancy, and clear structure, making it easier to analyze the complex relationships underlying human behavior than with other modalities. Although existing studies have encoded skeleton data and achieved positive results, they often overlook the precise high-level semantic information carried by action descriptions. To address this issue, this paper proposes a prompt-supervised dynamic attention graph convolutional network (PDA-GCN). Specifically, PDA-GCN incorporates a prompt supervision (PS) module that uses a pre-trained large language model (LLM) as a knowledge engine, retaining the generated text features as prompts that provide additional supervision during training; this enhances the model’s ability to distinguish similar actions at negligible computational cost. In addition, a dynamic attention graph convolution (DA-GC) module is introduced to strengthen the learning of discriminative features. This module uses a self-attention mechanism to adaptively infer intrinsic relationships between joints and integrates dynamic convolution to emphasize local information. This dual focus on global context and local detail further improves the model’s efficiency and effectiveness. Extensive experiments on the widely used NTU RGB+D 60 and NTU RGB+D 120 skeleton-based action recognition benchmarks show that PDA-GCN surpasses known state-of-the-art methods, achieving 93.4% accuracy on the NTU RGB+D 60 cross-subject split and 90.7% on the NTU RGB+D 120 cross-subject split.
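The abstract describes the DA-GC module as using self-attention to adaptively infer relationships between joints. The paper's actual implementation details are not given here, but the core idea can be sketched minimally: project per-joint features into queries and keys, and use their softmax-normalized inner products as a data-dependent joint-joint adjacency. All names, shapes, and projection matrices below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for normalizing attention scores
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_self_attention(X, Wq, Wk, Wv):
    """Sketch: infer a dynamic joint-joint adjacency via self-attention.

    X          : (V, C) features for V skeleton joints (assumed layout).
    Wq, Wk, Wv : (C, d) projection matrices (hypothetical parameters).
    Returns attended features (V, d) and the inferred (V, V) adjacency.
    """
    Q, K, Vp = X @ Wq, X @ Wk, X @ Wv
    # Scaled dot-product scores act as an adaptively inferred adjacency
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ Vp, A

rng = np.random.default_rng(0)
V, C, d = 25, 8, 8            # NTU RGB+D skeletons have 25 joints
X = rng.standard_normal((V, C))
Wq, Wk, Wv = (rng.standard_normal((C, d)) * 0.1 for _ in range(3))
out, A = joint_self_attention(X, Wq, Wk, Wv)
# Each row of A is a probability distribution over the other joints
```

In a full model, the inferred adjacency A would replace or augment the fixed skeletal graph inside a graph convolution layer; the dynamic-convolution branch for local detail is omitted here.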
