Human action/activity recognition (HAR) in real-time video is an active research problem in the machine learning community, and it presents numerous challenges. In this context, deep convolutional neural networks (CNNs) have played a powerful role in strengthening many vision-based HAR systems. In recent years, combining residual connections with a traditional CNN in a single architecture, known as the Residual Network (ResNet), has shown impressive performance and great potential for imaging tasks. In this paper, we propose to use skeletal trajectory maps for the detection of human actions, and we introduce a new ResNet-based algorithm, named dense ResNet, to perform the classification task. The trajectories of 3D joint locations are converted into color-coded RGB images. These trajectory-plotted images capture the spatio-temporal evolution of 3D motion from skeleton sequences and can be learned efficiently by deep learning algorithms. We then train the proposed dense ResNet to learn features from the color-coded RGB trajectory representations of the 3D joint locations of the human body. The proposed method is evaluated on the MSR Action 3D, UTKinect-Action3D, G3D, and NTU RGB-D datasets. Experimental results show that the proposed architecture attains good recognition rates with fewer computational resources.
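The trajectory-to-image encoding described above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the `(T, J, 3)` array layout, the per-axis min-max normalization, and the joint-by-frame image layout are all illustrative assumptions about how 3D joint coordinates might be mapped to RGB channels.

```python
import numpy as np

def skeleton_to_rgb_image(joints):
    """Map a skeleton sequence to a color-coded RGB image (a sketch).

    joints : float array of shape (T, J, 3) -- T frames, J joints,
             each joint given by (x, y, z) coordinates.
    Returns a uint8 image of shape (J, T, 3): rows index joints,
    columns index frames, and the x/y/z coordinates are normalized
    per axis into the R/G/B channels.
    """
    coords = np.asarray(joints, dtype=np.float64)          # (T, J, 3)
    # Normalize each coordinate axis independently to [0, 1]
    # over the whole sequence (assumed normalization scheme).
    mins = coords.min(axis=(0, 1), keepdims=True)
    maxs = coords.max(axis=(0, 1), keepdims=True)
    norm = (coords - mins) / np.maximum(maxs - mins, 1e-8)
    # Transpose so rows index joints and columns index frames.
    img = np.transpose(norm, (1, 0, 2))                    # (J, T, 3)
    return (img * 255).round().astype(np.uint8)

# Example: 40 frames of a hypothetical 20-joint skeleton.
rng = np.random.default_rng(0)
seq = rng.normal(size=(40, 20, 3))
img = skeleton_to_rgb_image(seq)
print(img.shape, img.dtype)  # (20, 40, 3) uint8
```

The resulting image can then be fed to a CNN like the proposed dense ResNet as an ordinary RGB input, so the spatio-temporal structure of the motion is learned with standard 2D convolutions.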