Abstract

Action recognition has recently attracted considerable attention in computer vision. In dynamic scenes with complicated backgrounds, problems such as object occlusion, insufficient lighting, and weak correlation among human body joints cause the accuracy of skeleton-based human action recognition to be low. To address these issues, we propose a Multi-View Time-Series Hypergraph Neural Network (MV-TSHGNN). The framework consists of two main parts: the construction of a multi-view time-series hypergraph structure and the learning process of multi-view time-series hypergraph convolutions. Specifically, given multi-view video sequence frames, we first extract joint features of actions from the different views. We then construct spatial hypergraphs over limb components and adjacent joints from the joints of different views at the same time instant, and temporal hypergraphs from the joints of the same view at consecutive times; these hypergraphs establish high-order semantic relationships and cooperatively generate complementary action features. Finally, we design a multi-view time-series hypergraph neural network that efficiently learns features from the spatial and temporal hypergraphs and effectively improves the accuracy of skeleton-based action recognition. To evaluate the effectiveness and efficiency of MV-TSHGNN, we conduct experiments on the NTU RGB+D, NTU RGB+D 120, and imitating traffic police gestures datasets. The experimental results indicate that our proposed model achieves new state-of-the-art performance.
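To make the hypergraph construction and convolution concrete, the following is a minimal sketch, not the authors' implementation: it builds incidence matrices for spatial hyperedges (e.g., limb components within one frame) and temporal hyperedges (the same joint across consecutive frames), and applies the standard hypergraph convolution X' = σ(D_v^{-1/2} H W D_e^{-1} Hᵀ D_v^{-1/2} X Θ) from the HGNN literature. All function names, the sliding-window size, and the example limb groupings over the 25-joint NTU skeleton are illustrative assumptions.

```python
# Hedged sketch: hypergraph incidence construction + one hypergraph convolution.
# Assumes the standard HGNN layer X' = sigma(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta);
# names and joint groupings below are illustrative, not from the paper.
import torch
import torch.nn as nn


def build_spatial_incidence(num_joints: int, hyperedges: list[list[int]]) -> torch.Tensor:
    """Incidence H (num_joints x num_edges): H[v, e] = 1 if joint v lies on hyperedge e."""
    H = torch.zeros(num_joints, len(hyperedges))
    for e, joints in enumerate(hyperedges):
        H[joints, e] = 1.0
    return H


def build_temporal_incidence(num_joints: int, num_frames: int, window: int = 3) -> torch.Tensor:
    """Each hyperedge links the same joint across `window` consecutive frames (one view)."""
    edges = []
    for j in range(num_joints):
        for t in range(num_frames - window + 1):
            edges.append([j + (t + k) * num_joints for k in range(window)])
    H = torch.zeros(num_joints * num_frames, len(edges))  # one node per (joint, frame)
    for e, nodes in enumerate(edges):
        H[nodes, e] = 1.0
    return H


class HypergraphConv(nn.Module):
    """One hypergraph convolution with uniform hyperedge weights W = I."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        W = torch.eye(H.shape[1], device=H.device)           # hyperedge weights
        Dv = torch.diag(H.sum(dim=1).clamp(min=1) ** -0.5)   # node degree^-1/2
        De = torch.diag(H.sum(dim=0).clamp(min=1) ** -1)     # edge degree^-1
        return torch.relu(Dv @ H @ W @ De @ H.T @ Dv @ self.theta(X))


# Toy usage: limb-component hyperedges for one frame of a 25-joint NTU skeleton
# (groupings are illustrative: trunk/head, two arms, two legs).
limbs = [[0, 1, 20, 2, 3], [20, 4, 5, 6, 7], [20, 8, 9, 10, 11],
         [0, 12, 13, 14, 15], [0, 16, 17, 18, 19]]
H = build_spatial_incidence(25, limbs)
X = torch.randn(25, 64)              # per-joint features from one view
out = HypergraphConv(64, 128)(X, H)  # (25, 128)
```

In a multi-view setting, spatial incidence matrices from the different views at the same instant and temporal incidence matrices per view would be fed to such layers and their outputs fused; the fusion strategy is the paper's contribution and is not reproduced here.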
