Abstract

Forecasting 3-dimensional skeleton-based human poses from a historical sequence is a classic task with enormous potential in robotics, computer vision, and graphics. Current state-of-the-art methods formulate this problem with graph convolutional networks (GCNs), which capture the relationships between pairs of human joints. However, human action involves complex interactions among multiple joints, which exhibit higher-order correlations beyond the pairwise (second-order) connections of GCNs. Moreover, joints are typically activated by their parent joints rather than driving them, yet existing methods ignore this specific direction of information transmission. In this work, we propose a novel hybrid directed hypergraph convolution network (H-DHGCN) to model the high-order relationships of the human skeleton with directionality. Specifically, our H-DHGCN involves two core components. The first is a static directed hypergraph, pre-defined according to the human body structure, which effectively leverages the natural relations among human joints. The second is a dynamic directed hypergraph (D-DHG), which is learnable and constructed adaptively to capture the unique characteristics of each motion sequence. In contrast to typical GCNs, our method yields a richer and more refined topological representation of skeleton data. Experimental results on several large-scale benchmarks show that the proposed model consistently surpasses the latest techniques.
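
To make the idea of directed hypergraph convolution concrete, the following minimal PyTorch sketch shows one plausible way features could flow from tail (parent-side) joints through a hyperedge to head (child-side) joints. This is not the authors' implementation: the class name, the incidence-matrix encoding (H_tail, H_head), and the mean aggregation scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class DirectedHypergraphConv(nn.Module):
    """Illustrative directed hypergraph convolution (sketch, not the paper's code).

    H_tail, H_head: (num_joints, num_edges) incidence matrices.
    H_tail[v, e] = 1 if joint v is a tail (source) of hyperedge e;
    H_head[v, e] = 1 if joint v is a head (target) of hyperedge e.
    Features flow from tail joints to head joints, mirroring the
    parent-to-child activation direction described in the abstract.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim)

    def forward(self, x, H_tail, H_head):
        # x: (num_joints, in_dim) joint features for one frame.
        d_e = H_tail.sum(dim=0).clamp(min=1.0)                # hyperedge degrees
        edge_feat = (H_tail.t() @ x) / d_e.unsqueeze(-1)      # gather from tail joints
        d_v = H_head.sum(dim=1).clamp(min=1.0)                # joint in-degrees
        node_feat = (H_head @ edge_feat) / d_v.unsqueeze(-1)  # scatter to head joints
        return torch.relu(self.theta(node_feat))


# Toy usage: 4 joints, 2 directed hyperedges (hypothetical joint groupings).
if __name__ == "__main__":
    x = torch.randn(4, 3)  # 3D joint coordinates
    H_tail = torch.tensor([[1., 1.], [0., 0.], [0., 0.], [0., 0.]])
    H_head = torch.tensor([[0., 0.], [1., 0.], [1., 0.], [0., 1.]])
    layer = DirectedHypergraphConv(3, 16)
    print(layer(x, H_tail, H_head).shape)  # torch.Size([4, 16])
```

In this sketch, a fixed H_tail/H_head pair would play the role of the static directed hypergraph, while a learnable or data-dependent pair would correspond to the dynamic directed hypergraph (D-DHG) described above.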
