Abstract

Skeleton‐based neural networks have become a focus of human action recognition (HAR) research. However, existing skeleton‐based methods struggle to combine spatial and temporal features effectively into high‐level representations, and learning discriminative representations of skeleton actions remains a challenging task. In this study, a novel two‐stream spatiotemporal network (TSTN) is proposed that processes spatial and temporal features both separately and jointly to achieve a better representation and understanding of human action. The temporal branch stacks three gated recurrent unit (GRU) blocks in a new architecture to encode temporal correlations from different aspects of human action, yielding high‐level temporal semantic features. The spatial branch encodes spatial features with stacked graph convolutional network (GCN) blocks. Self‐attention mechanisms incorporating the graph structure of the skeleton are explored to provide weighting and structural cues that further enhance performance. The experimental results verify the effectiveness and superiority of the proposed model in skeleton action recognition, and it achieves state‐of‐the‐art performance on the datasets evaluated.
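To make the two‐stream design concrete, the sketch below shows one plausible PyTorch layout of a TSTN‐style model. The abstract does not specify layer sizes, the exact GRU‐block arrangement, the GCN formulation, the form of the self‐attention, or how the two streams are fused, so every structural choice here (hidden dimensions, normalized‐adjacency graph convolution, multi‐head self‐attention over joints, late fusion by summing class scores) is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only; all architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNBlock(nn.Module):
    """One spatial block: graph convolution over the skeleton adjacency (assumed form)."""
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.register_buffer("adj", adj)      # (V, V) normalized skeleton adjacency
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):                     # x: (N, T, V, C)
        x = torch.einsum("uv,ntvc->ntuc", self.adj, x)  # aggregate neighboring joints
        return F.relu(self.fc(x))


class TSTNSketch(nn.Module):
    """Hypothetical two-stream spatiotemporal network for skeleton action recognition."""
    def __init__(self, adj, num_joints=25, in_channels=3, num_classes=60, hidden=128):
        super().__init__()
        # Temporal branch: three stacked GRU blocks over per-frame joint features.
        self.gru = nn.GRU(num_joints * in_channels, hidden,
                          num_layers=3, batch_first=True)
        # Spatial branch: stacked GCN blocks over the skeleton graph.
        self.gcn = nn.Sequential(GCNBlock(in_channels, hidden, adj),
                                 GCNBlock(hidden, hidden, adj))
        # Self-attention over joints to weight informative body parts (assumed form).
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.cls_t = nn.Linear(hidden, num_classes)
        self.cls_s = nn.Linear(hidden, num_classes)

    def forward(self, x):                     # x: (N, T, V, C)
        n, t, v, c = x.shape
        # Temporal stream: flatten joints per frame, run the GRUs, keep the last state.
        h_t, _ = self.gru(x.reshape(n, t, v * c))
        temporal = h_t[:, -1]                 # (N, hidden)
        # Spatial stream: GCN blocks, average over time, then self-attention over joints.
        h_s = self.gcn(x).mean(dim=1)         # (N, V, hidden)
        h_s, _ = self.attn(h_s, h_s, h_s)
        spatial = h_s.mean(dim=1)             # (N, hidden)
        # Late fusion of the two streams by summing class scores (assumption).
        return self.cls_t(temporal) + self.cls_s(spatial)
```

Given a batch of skeleton sequences shaped `(batch, frames, joints, coordinates)` and a normalized adjacency matrix for the skeleton graph, `TSTNSketch(adj)(x)` would return per-class scores; the actual paper should be consulted for the true block definitions and fusion strategy.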
