Abstract

Stimulated by its important applications in animation, gaming, virtual reality, augmented reality, and healthcare, 3D human pose estimation has received considerable attention in recent years. To improve accuracy, most approaches convert this challenging task into a local pose estimation problem by dividing the body joints into groups based on the human body topology. The joint features of the different groups are then fused to predict the overall pose of the whole body, which requires a joint feature fusion scheme. However, the fusion schemes adopted in existing methods involve learning extensive parameters and are therefore computationally expensive. This paper presents a new topology-based grouped method, 'EHFusion', for 3D human pose estimation, which employs a heterogeneous feature fusion (HFF) module to integrate grouped pose features. The HFF module reduces the computational complexity of the model while achieving promising accuracy. Moreover, we introduce motion amplitude information and a camera intrinsic embedding module to provide better global information and 2D-to-3D conversion knowledge, thereby improving the overall robustness and accuracy of the method. In contrast to previous methods, the proposed network can be trained end-to-end in a single stage. Experimental results on two public datasets, Human3.6M and HumanEva, not only demonstrate the advantageous trade-off between estimation accuracy and computational complexity achieved by our method but also show its competitive performance against various existing state-of-the-art methods (e.g., transformer-based approaches). The data and code are available at doi:10.5281/zenodo.11113132