Daily human activity contains rich behavioral cues. With the growth of big data, images and videos have become the primary media for disseminating information, and understanding the human behaviors they depict has become a central problem in computer vision. Behavior recognition has practical applications in human-computer interaction, intelligent surveillance, and anomaly detection, combining practical utility with research value. This paper proposes a human behavior recognition method based on skeleton sequences and a multi-stream fused spatiotemporal graph convolutional network. Building on graph convolutional networks (GCNs), the method introduces three improvements that address known limitations. First, because GCNs struggle to capture long-range dependencies between nodes, we add a spatial attention module that uses precise positional information to model long-term dependencies between joints across time and space. Second, to improve the network's use of channel information and allocate attention appropriately across channels, we introduce a channel attention mechanism that strengthens motion-related features. Third, to compensate for the information missing from any single data stream, we adopt a multi-stream fusion strategy that combines the outputs of multiple streams, yielding more accurate action classification. Experiments show that the proposed multi-stream fused spatiotemporal graph convolutional network is effective for skeleton-based behavior recognition, achieving a top accuracy of 96.0% on the large-scale NTU-RGB+D skeleton dataset and 37.3% on the Kinetics-Skeleton dataset, which is derived from RGB video via pose estimation.
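To make the channel attention and multi-stream fusion ideas concrete, the following is a minimal PyTorch sketch. The squeeze-and-excitation-style channel reweighting, the late score-level fusion, and all module names, shapes, and stream choices are illustrative assumptions, not the paper's actual implementation, which the abstract does not specify.

```python
# Sketch of (a) an SE-style channel attention block over skeleton features
# and (b) late fusion of per-stream class scores. Shapes follow the common
# ST-GCN convention (N, C, T, V): batch, channels, frames, joints.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweights feature channels (squeeze-and-excitation style, assumed)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        # Squeeze: global average over time and joints -> (N, C).
        w = self.fc(x.mean(dim=(2, 3)))
        # Excite: per-channel reweighting, broadcast over (T, V).
        return x * w.view(n, c, 1, 1)


def fuse_streams(logits, weights=None) -> torch.Tensor:
    """Late fusion: weighted average of per-stream class probabilities."""
    weights = weights or [1.0] * len(logits)
    return sum(w * s.softmax(dim=-1) for w, s in zip(weights, logits))


# Example: fuse hypothetical joint-, bone-, and motion-stream predictions
# over the 60 action classes of NTU-RGB+D.
joint, bone, motion = (torch.randn(2, 60) for _ in range(3))
scores = fuse_streams([joint, bone, motion])
pred = scores.argmax(dim=-1)
```

Score-level (late) fusion is one common way to combine streams; whether the paper fuses scores or intermediate features is not stated in the abstract.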