Abstract

Skeleton-based human action recognition is attracting significant attention and finding widespread application in fields such as virtual reality and human-computer interaction. Recent studies have highlighted the effectiveness of graph convolutional network (GCN) based methods for this task, yielding remarkable improvements in recognition accuracy. However, most GCN-based methods overlook the varying contributions of the self, centripetal, and centrifugal subsets. Moreover, they extract temporal features at only a single scale, ignoring multi-scale temporal information. To address these issues, we first develop a refinement graph convolution that adaptively learns a weight for each subset feature, differentiating the importance of the skeleton subsets. Second, we propose a multi-temporal-scale aggregation module to extract more discriminative temporal dynamics. Building on these components, we propose a multi-temporal-scale aggregation refinement graph convolutional network (MTSA-RGCN) with a four-stream structure that comprehensively models complementary features and ultimately achieves a significant performance boost. In empirical experiments, our approach substantially outperforms other state-of-the-art methods on both the NTU-RGB+D 60 and NTU-RGB+D 120 datasets.
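To make the two ideas in the abstract concrete, the sketch below illustrates (a) a graph convolution that sums over the self, centripetal, and centrifugal adjacency subsets with a learned scalar importance per subset, and (b) a simple multi-window temporal aggregation. All function names, shapes, and the use of average pooling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def refinement_graph_conv(x, subsets, weights, alphas):
    # x: (T, V, C_in) joint features over T frames and V joints.
    # subsets: list of (V, V) adjacency matrices (self, centripetal,
    # centrifugal); weights: per-subset (C_in, C_out) projections;
    # alphas: learned scalar importance per subset.
    # (Hypothetical sketch of the abstract's idea, not the paper's code.)
    T, V = x.shape[0], x.shape[1]
    out = np.zeros((T, V, weights[0].shape[1]))
    for A_k, W_k, a_k in zip(subsets, weights, alphas):
        # Weight each subset's contribution by its learned importance a_k.
        out += a_k * np.einsum('uv,tvc,cd->tud', A_k, x, W_k)
    return out

def multi_scale_temporal_aggregate(x, kernel_sizes=(3, 5, 7)):
    # Average-pool along time with several window sizes ("same" padding)
    # and average the branches -- a stand-in for multi-temporal-scale fusion.
    T = x.shape[0]
    out = np.zeros_like(x)
    for k in kernel_sizes:
        pad = k // 2
        xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)), mode='edge')
        pooled = np.stack([xp[t:t + k].mean(axis=0) for t in range(T)])
        out += pooled
    return out / len(kernel_sizes)
```

In a trained network the `alphas` would be learnable parameters optimized jointly with the projection weights, so the model itself decides how much each subset contributes.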