Human action recognition remains a challenging computer vision problem whose solution hinges on a robust action descriptor. We propose an action recognition descriptor that relies solely on 3D skeleton joint positions. The backbone of the approach is the modeling of joint-joint and frame-frame interrelationships: during any action sequence, many joints are related to one another, and each frame depends on other frames. Spatial information about the joint positions is computed from angle, sine-relation, and distance features, while temporal information is estimated from frame-to-frame relations; together, these Angle, Sine-relation, and Distance features, extracted from the interrelationships of joints and frames, form the proposed ASD-R descriptor. A Support Vector Machine classifier then operates on the descriptor for precise classification. Experiments are performed on four publicly available datasets, i.e., the MSR Daily Activity 3D Dataset, the UTD Multimodal Human Action Dataset, the KARD (Kinect Activity Recognition) Dataset, and the SBU Kinect Interaction Dataset, and show that the proposed descriptor outperforms state-of-the-art approaches on all four datasets, owing to its accurate capture of the spatial and temporal information of the joint positions.
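
To make the feature definitions concrete, the following is a minimal sketch, assuming one plausible reading of the abstract: each frame is an array of N joints with (x, y, z) coordinates, the distance feature is the Euclidean distance between a joint pair, the angle feature is the angle at a joint formed by two neighboring joints, the sine relation is the sine of that angle, and the temporal feature is the per-joint displacement between frames. The function names, the joint indices, and these exact formulas are illustrative assumptions, not the authors' precise formulation.

```python
import numpy as np

def distance_feature(frame, i, j):
    """Euclidean distance between joints i and j.
    frame: (N, 3) array of 3D joint coordinates."""
    return np.linalg.norm(frame[i] - frame[j])

def angle_feature(frame, i, j, k):
    """Angle (radians) at joint j formed by joints i and k."""
    u = frame[i] - frame[j]
    v = frame[k] - frame[j]
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def sine_relation_feature(frame, i, j, k):
    """Sine of the angle at joint j -- an assumed interpretation
    of the paper's 'sine relation' feature."""
    return np.sin(angle_feature(frame, i, j, k))

def temporal_feature(frame_t, frame_s):
    """Per-joint displacement magnitudes between two frames,
    a simple proxy for the frame-to-frame relations."""
    return np.linalg.norm(frame_t - frame_s, axis=1)

# Toy example: a two-frame sequence with 20 joints.
rng = np.random.default_rng(0)
seq = rng.standard_normal((2, 20, 3))
print(distance_feature(seq[0], 0, 1))
print(angle_feature(seq[0], 0, 1, 2))
print(sine_relation_feature(seq[0], 0, 1, 2))
print(temporal_feature(seq[1], seq[0]))
```

In a full pipeline, such per-frame spatial features and inter-frame temporal features would be concatenated over the sequence into a fixed-length descriptor and passed to an SVM classifier (e.g., scikit-learn's SVC); the aggregation scheme here is left open, as the abstract does not specify it.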