Existing key point detection networks have complex structures, which makes them difficult to deploy on edge devices. Moreover, convolution is a local operation limited by kernel size, so it cannot effectively capture long-range dependencies. To address these problems, this paper introduces a lightweight Convolutional Transformer network (LHFormer Net) for human pose estimation. Because the sampling area of convolution kernels is fixed across feature maps of different dimensions and the contextual information they capture is limited, an enhanced receptive field block is designed to extract richer feature information and reduce information loss in feature maps. Building on the global modeling capability of the Transformer encoder, convolutional position encoding and multi-head self-attention are used during deep feature extraction to capture the spatial constraint relationships between key points. Finally, a lightweight deconvolution module generates higher-resolution features for multi-resolution supervision, which effectively handles scale variation in pose estimation, locates key points of small and medium-sized persons more accurately, and further improves the detection accuracy of the network. Experimental results on the open-access COCO2017 and MPII datasets show that, compared with other networks, the proposed network achieves a good balance between model complexity and detection performance.
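The abstract does not specify how the convolutional position encoding and multi-head self-attention are implemented; the following is a minimal NumPy sketch of the general pattern, assuming a CPVT-style encoding (a depthwise-style local convolution added residually to the flattened feature tokens, here stood in for by a fixed averaging kernel rather than learned weights) followed by standard scaled dot-product multi-head self-attention with identity projections:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv_position_encoding(tokens, h, w, k=3):
    """Assumed CPVT-style convolutional position encoding.

    tokens: (N, C) array of N = h*w flattened feature-map tokens.
    A local k x k spatial aggregation per channel stands in for a
    learned depthwise conv; the result is added back residually.
    """
    n, c = tokens.shape
    fmap = tokens.reshape(h, w, c)
    pad = k // 2
    padded = np.pad(fmap, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(fmap)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean(axis=(0, 1))
    return tokens + out.reshape(n, c)

def multi_head_self_attention(x, num_heads):
    """Scaled dot-product self-attention over tokens x: (N, C).

    Identity Q/K/V projections keep the sketch dependency-free;
    a real layer would use learned linear projections per head.
    """
    n, c = x.shape
    d = c // num_heads
    heads = []
    for hd in range(num_heads):
        q = k = v = x[:, hd * d:(hd + 1) * d]
        attn = softmax(q @ k.T / np.sqrt(d))  # (N, N) attention weights
        heads.append(attn @ v)
    return np.concatenate(heads, axis=1)

# Example: a 4x4 feature map with 8 channels, 2 attention heads.
x = np.random.default_rng(0).normal(size=(16, 8))
y = multi_head_self_attention(conv_position_encoding(x, 4, 4), 2)
print(y.shape)  # token count and channels are preserved: (16, 8)
```

Every token attends to every other token, which is what lets the Transformer encoder model long-range spatial constraints between key points that a fixed-size convolution kernel cannot reach.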