Abstract

The machine learning research community is actively working on human action/activity recognition (HAR) in real-time videos and faces numerous challenges in doing so. In this context, deep convolutional neural networks (CNNs) have played a powerful role in strengthening many vision-based HAR systems. In recent years, combining residual connections with a traditional CNN in a single architecture, known as the Residual Network (ResNet), has shown impressive performance and great potential for imaging tasks. In this paper, we propose to use skeletal trajectory maps for the detection of human actions, and a new ResNet-based architecture, named dense ResNet, is proposed to perform the classification task. The trajectories of 3D joint locations are converted into color-coded RGB images. These plotted trajectory images capture the spatio-temporal evolution of 3D motion from skeleton sequences and can be learned efficiently by deep learning algorithms. We then train the proposed dense ResNet to learn features from these color-coded RGB trajectory images of the 3D joint locations of the human body. The proposed method is evaluated on the MSR Action 3D, UTKinect-Action3D, G3D and NTU RGB-D datasets. Experimental results show that the proposed architecture attains good recognition rates with fewer computational resources.
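The abstract does not describe the exact encoding used to produce the color-coded trajectory images. As a minimal, hypothetical sketch (not the paper's implementation), one common scheme maps the normalized x, y, z coordinates of each joint to the R, G, B channels, with joints along one image axis and frames along the other; the function name `skeleton_to_trajectory_map`, the input shape `(T, J, 3)`, and the output size are assumptions for illustration only.

```python
import numpy as np
from PIL import Image

def skeleton_to_trajectory_map(joints, out_size=(224, 224)):
    """Encode a skeleton sequence as a color-coded RGB trajectory map.

    joints: array of shape (T, J, 3) -- T frames, J joints, (x, y, z).
    Each (joint, frame) pair becomes one pixel; the x, y, z coordinates
    are normalized per axis and mapped to the R, G, B channels.
    NOTE: this is an illustrative encoding, not the dense ResNet paper's.
    """
    # Normalize each coordinate axis to [0, 1] over the whole sequence.
    mins = joints.reshape(-1, 3).min(axis=0)
    maxs = joints.reshape(-1, 3).max(axis=0)
    norm = (joints - mins) / (maxs - mins + 1e-8)
    # Rows index joints, columns index frames, channels encode (x, y, z).
    img = (norm.transpose(1, 0, 2) * 255).astype(np.uint8)  # (J, T, 3)
    # Resize to a fixed input size expected by the CNN classifier.
    return np.asarray(Image.fromarray(img).resize(out_size, Image.BILINEAR))

# Example: a random 60-frame sequence with 20 joints (e.g., a Kinect skeleton).
seq = np.random.rand(60, 20, 3).astype(np.float32)
rgb_map = skeleton_to_trajectory_map(seq)
print(rgb_map.shape)  # (224, 224, 3)
```

The resulting RGB maps can then be fed to any image classifier; the paper trains its proposed dense ResNet on them.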
