Abstract

Human skeleton sequences provide important information for human action recognition. Compared with video data, skeleton data are less sensitive to lighting conditions and contain less redundancy. Many traditional skeleton-based methods assume that all joints in a skeleton sequence contribute equally to an action. This paper proposes a novel action recognition method based on an attention and temporal graph convolutional network (ATGCN). The network automatically learns spatial and temporal features from skeleton data, and an attention mechanism weights the joints to capture each joint's contribution to an action. Experiments conducted on the NTU RGB+D and WorkoutUOW-18 datasets demonstrate the effectiveness of the proposed method.
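
The joint-weighting idea named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the feature shapes and the dot-product scoring function are assumptions chosen for illustration; the sketch only shows how a softmax over per-joint scores produces weights that emphasize some joints over others.

```python
# Minimal sketch (an assumption, not the ATGCN implementation): joint-wise
# attention that re-weights each skeleton joint's features before they are
# passed to a graph convolution.
import numpy as np

def joint_attention(x, w):
    """x: (num_joints, num_features) features for one skeleton frame.
    w: (num_features,) assumed learned scoring vector.
    Returns per-joint attention weights and the re-weighted features."""
    scores = x @ w                                  # one scalar score per joint
    scores -= scores.max()                          # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over joints
    return alpha, x * alpha[:, None]                # scale each joint's features

# Toy example: 3 joints, 2 features each.
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])
alpha, weighted = joint_attention(x, np.array([1.0, 1.0]))
```

Here the third joint receives the largest weight because its score is highest, matching the abstract's point that not all joints contribute equally to an action.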
