Abstract

In recent years, skeleton-based human action recognition has attracted substantial attention. However, owing to the complexity and nonlinearity of human action data, precisely representing skeleton features remains a challenging task. Motivated by the effectiveness of Lie Group skeletal representations in extracting human action features and the powerful capability of deep neural networks in feature learning and high-dimensional data processing, we propose combining Lie Group features with deep learning for human action recognition. Human skeleton information is first used to suppress the interference of external factors such as changes in lighting conditions and body shape. A Lie Group representation is then applied to naturally model the complex and diverse action data. Finally, convolutional neural networks are used to learn and classify the Lie Group features. Experiments were performed on three public datasets, and the results show that our method achieves higher average recognition accuracy of 93.00% on Florence3D-Action, 93.68% on MSR Action Pairs, and 97.96% on UT Kinect-Action, outperforming many state-of-the-art methods.
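As a rough illustration of the pipeline summarized above, the sketch below builds a Lie-group feature map from pairwise bone rotations (mapped to the Lie algebra so(3)) and classifies it with a small convolutional network. The skeleton layout, the function names such as `skeleton_to_lie_features`, and the network architecture are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch, assuming a Lie-group skeletal representation in which each
# pair of bones is encoded by the rotation aligning one with the other,
# expressed in the Lie algebra so(3), and a toy CNN is trained on the
# resulting per-frame feature maps. Not the paper's exact architecture.
import numpy as np
import torch
import torch.nn as nn


def bone_rotation(u, v, eps=1e-8):
    """Axis-angle (so(3)) vector of the rotation taking unit bone u to unit bone v.
    (The degenerate anti-parallel case is ignored for brevity.)"""
    u = u / (np.linalg.norm(u) + eps)
    v = v / (np.linalg.norm(v) + eps)
    axis = np.cross(u, v)
    sin_a = np.linalg.norm(axis)
    cos_a = np.clip(np.dot(u, v), -1.0, 1.0)
    if sin_a < eps:                      # (nearly) parallel bones: zero rotation
        return np.zeros(3)
    return axis / sin_a * np.arctan2(sin_a, cos_a)   # log map of SO(3) as a 3-vector


def skeleton_to_lie_features(joints, bones):
    """joints: (T, J, 3) joint positions; bones: list of (start, end) joint indices.
    Returns a (T, P, P, 3) array of pairwise bone rotations in the Lie algebra."""
    T, P = joints.shape[0], len(bones)
    feats = np.zeros((T, P, P, 3))
    for t in range(T):
        vecs = [joints[t, e] - joints[t, s] for s, e in bones]
        for i in range(P):
            for j in range(P):
                if i != j:
                    feats[t, i, j] = bone_rotation(vecs[i], vecs[j])
    return feats


class LieCNN(nn.Module):
    """Toy CNN over the (channels=3, P, P) Lie-algebra feature map of one frame."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):                # x: (batch, 3, P, P)
        return self.net(x)


if __name__ == "__main__":
    T, J = 30, 15                        # frames, joints (hypothetical skeleton)
    bones = [(i, i + 1) for i in range(J - 1)]
    joints = np.random.randn(T, J, 3)    # stand-in for real skeleton sequences
    feats = skeleton_to_lie_features(joints, bones)          # (T, P, P, 3)
    x = torch.tensor(feats, dtype=torch.float32).permute(0, 3, 1, 2)
    logits = LieCNN(num_classes=10)(x)
    print(logits.shape)                  # torch.Size([30, 10])
```

In this sketch each frame is treated independently; extending it to whole sequences (e.g., by stacking frames along an extra dimension) would follow the same feature-construction idea.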
