Abstract

To address poor detection performance and the under-utilization of spatial relationships between skeletal nodes in human pose estimation, a method based on an improved spatial temporal graph convolutional network (ST-GCN) model is proposed. Firstly, upsampling and segmented random sampling strategies are used to address the class imbalance and long sequence lengths of the dataset. Secondly, an improved detection transformer (DETR) structure is added, eliminating the need for non-maximum suppression (NMS) and anchor generation; a multi-head attention (M-ATT) module is introduced into each ST-GCN cell to capture richer feature information; and a residual module is introduced into the 9th ST-GCN cell to avoid possible network degradation in deep networks. In addition, training strategies such as warmup, regularization, loss-function selection, and optimizer configuration are applied to improve the model's performance. The experimental results show that the average percentage of correct keypoints (PCK) of the proposed method is 93.2% on the FSD dataset and 92.7% on the MPII dataset, which is 1.9% and 1.7% higher, respectively, than the average PCK of the original ST-GCN method. Moreover, the confusion matrix of the proposed method indicates high recognition accuracy. Finally, comparison experiments with ST-GCN and other methods show that the proposed model requires about 1.7 GFLOPs of computation and about 6.4 GMACs, indicating good computational efficiency.
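As a rough illustration of the architectural change described above, the following minimal PyTorch sketch shows how a multi-head attention branch and a residual path might be wired into an ST-GCN-style cell. It is not the authors' implementation: the class name STGCNAttnCell, the normalized adjacency matrix A_hat, and the parameter choices (num_heads=4, t_kernel=9) are illustrative assumptions, and in the full model the residual branch would be enabled only where the paper adds it (e.g., the 9th cell).

import torch
import torch.nn as nn

class STGCNAttnCell(nn.Module):
    """One ST-GCN-style cell with a multi-head attention branch and a residual path (sketch)."""
    def __init__(self, in_ch, out_ch, A_hat, num_heads=4, t_kernel=9, residual=True):
        super().__init__()
        self.register_buffer("A_hat", A_hat)                 # (V, V) normalized skeleton adjacency
        self.gcn = nn.Conv2d(in_ch, out_ch, kernel_size=1)   # per-node feature transform
        self.tcn = nn.Sequential(                            # temporal convolution over frames
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=(t_kernel, 1),
                      padding=((t_kernel - 1) // 2, 0)),
            nn.BatchNorm2d(out_ch),
        )
        self.attn = nn.MultiheadAttention(out_ch, num_heads, batch_first=True)
        if not residual:
            self.residual = None
        elif in_ch == out_ch:
            self.residual = nn.Identity()
        else:
            self.residual = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                                    # x: (N, C, T, V)
        res = self.residual(x) if self.residual is not None else 0
        y = torch.einsum("nctv,vw->nctw", self.gcn(x), self.A_hat)   # spatial graph convolution
        y = self.tcn(y)
        n, c, t, v = y.shape
        z = y.permute(0, 2, 3, 1).reshape(n * t, v, c)       # attend across the V joints per frame
        z, _ = self.attn(z, z, z)
        y = y + z.reshape(n, t, v, c).permute(0, 3, 1, 2)    # fuse attention output back in
        return self.relu(y + res)

# Illustrative usage: 18 joints, 64 channels, 50-frame clips.
# A_hat = torch.eye(18)   # placeholder; real models use the normalized skeleton graph
# cell = STGCNAttnCell(64, 64, A_hat)
# out = cell(torch.randn(2, 64, 50, 18))

Attending over the joint dimension within each frame is one plausible way to realize the "richer feature information" the abstract attributes to the M-ATT module, while the residual path guards against degradation in deeper stacks of such cells.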
