Abstract

In the current digital era, large-scale Artificial Intelligence (AI) models have fundamentally changed the landscape of the AI field. These models excel at extracting complex patterns and information from massive datasets, making them indispensable across a wide range of domains. With the advancement of large-scale AI models, pose estimation has achieved improved performance on large datasets and computationally demanding tasks. However, most existing pose estimation methods suffer from inaccurate pose graph extraction in complex scenes. Moreover, existing methods focus on message passing over a fixed, predefined graph, ignoring the fact that this structure may not be optimal; such a fixed graph structure struggles to reflect the real dependencies between joints. This study proposes a Character Redetection Model (CRM) that dynamically adjusts the foreground extraction strategy to address these problems. The CRM locates the target position using the rich spatiotemporal information between adjacent frames, effectively resolving the inaccurate foreground acquisition caused by occlusion and blur in complex scenes. In addition, an Uncertainty Graph Structure (UGS) is designed, which adaptively learns the graph topology for different Graph Convolutional Network (GCN) layers and skeleton samples in an end-to-end manner and mines the hidden spatial dependencies in the data to accurately model the different poses of the characters. The experimental results show that the proposed method achieves superior performance compared with State-Of-The-Art (SOTA) methods, with a Percentage of Correct Keypoints (PCK) of 94.6% on the MPII (Max Planck Institute for Informatics) dataset, along with mean Average Precision (mAP) values of 83.3% and 82.1% on the PoseTrack2017 and classroom pose datasets, respectively.
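The abstract does not specify how the UGS is implemented. As a rough, hypothetical sketch of the general idea it describes, an adjacency that is learned per GCN layer and adapted per skeleton sample, the following PyTorch layer combines a fixed skeleton graph with a learned offset and a data-dependent similarity term. All names here (AdaptiveGraphConv, a_skeleton, the embedding convolutions) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """Hypothetical graph-convolution layer with an adaptively learned topology.

    The combined adjacency has three parts:
      A: fixed physical skeleton graph (shared by all samples and layers),
      B: freely learned offset, specific to this layer,
      C: data-dependent term from pairwise joint-feature similarity,
         so each skeleton sample gets its own topology.
    """
    def __init__(self, in_channels, out_channels, num_joints, a_skeleton):
        super().__init__()
        self.register_buffer("A", a_skeleton)                       # (J, J) fixed skeleton graph
        self.B = nn.Parameter(torch.zeros(num_joints, num_joints))  # learned, layer-specific offset
        self.theta = nn.Conv1d(in_channels, out_channels // 4, 1)   # joint embeddings for similarity
        self.phi = nn.Conv1d(in_channels, out_channels // 4, 1)
        self.proj = nn.Conv1d(in_channels, out_channels, 1)         # feature transform

    def forward(self, x):
        # x: (batch, channels, joints)
        q = self.theta(x)                                                  # (B, C', J)
        k = self.phi(x)                                                    # (B, C', J)
        C = torch.softmax(torch.einsum("bcj,bck->bjk", q, k), dim=-1)      # sample-specific graph (B, J, J)
        adj = self.A + self.B + C                                          # combined topology
        out = torch.einsum("bcj,bjk->bck", self.proj(x), adj)              # propagate features over the graph
        return out

# Hypothetical usage: 16 joints, fixed skeleton adjacency of shape (16, 16)
a_skel = torch.eye(16)  # placeholder; a real skeleton adjacency would encode bone connections
layer = AdaptiveGraphConv(in_channels=64, out_channels=128, num_joints=16, a_skeleton=a_skel)
features = torch.randn(8, 64, 16)  # (batch, channels, joints)
out = layer(features)              # (8, 128, 16)
```

In this sketch, the softmax-normalized similarity term plays the role of the sample-dependent topology, while the learned offset B lets each layer deviate from the physical skeleton; how the paper actually parameterizes and regularizes these components is not stated in the abstract.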
