The dynamic information of joints, such as movement amplitude, is critical for forecasting precise human joint trajectories. Existing methods adopt global modeling, in which all joints are treated as a whole to extract features for movement coordination. Although global modeling can exploit hidden relationships between joints, it inevitably introduces undesired trajectory dependencies, which weaken the effect of dynamic information and oversimplify the constraints and kinetic model of the joints. We therefore propose a dynamic pattern-based collaborative modeling framework (DPnet) that contains a keyframe enhanced module (KEM) and multi-channel feature extractor blocks (MFE-blocks). The KEM tackles the discontinuity between the last observed frame and the first predicted one by duplicating the decisive frame. The MFE-block utilizes a multi-channel graph structure to enrich the dynamic information and recessive constraints of the joints. To distinguish the dynamic information of each joint, we calculate the movement amplitude of the joints and define three dynamic patterns: active, inactive, and static. We also propose a dynamic pattern-guided feature extractor (DP-FE) to alleviate trajectory dependencies between joints with different dynamic patterns. We evaluate our approach on three standard benchmark datasets: H3.6M [8], CMU-Mocap [44], and 3DPW [45]. Our approach achieves strong results in both short-term and long-term prediction, confirming its effectiveness and efficiency.