Abstract

Optical motion capture systems are widely used to obtain human body poses, yet two problems persist. The first is the dislocation problem, which arises when joints are too close together. The second is the joint-loss problem: under severe self-occlusion, cameras may fail to capture the target joints. Motivated by these observations, we investigate high-level constraints on human poses to address both problems. In this work, we present a Simplified-attention Enhanced Graph Convolutional Network (SaEGC-Net) that flexibly extracts both spatial and temporal features from monocular videos. SaEGC-Net is a U-shaped network for 3D human pose estimation built from Cascaded Spatial-Temporal Graph Convolutional (CST-GC) blocks and Simplified Spatial-Temporal Attention (SST-Att) blocks, which capture long-range dependencies between unconnected joints through graph topologies and an attention mechanism, respectively. Specifically, the CST-GC block embeds two predefined graph structures into a convolutional network, incorporating discriminative features from distant joints. The SST-Att block discards redundant information by sharing part of the attention map, making it highly lightweight, and it models dimension-expanded joint relationships to preserve the diversity of dependencies. To evaluate the effectiveness of our method, we conduct extensive experiments on two datasets: Human3.6M and our own dataset, FDU-Motion. Results demonstrate that our model achieves excellent performance and competently handles both problems. Ablation studies further show that our network's submodules better exploit the motion information of the human body.
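The abstract gives no implementation details, so the following PyTorch-style sketch is purely illustrative of the kind of building blocks described: a spatial graph convolution over a predefined joint adjacency, and a lightweight temporal attention whose map is shared across joints. All class names, tensor shapes, and the joint-averaged sharing scheme are assumptions for illustration, not the paper's actual SaEGC-Net design.

```python
import torch
import torch.nn as nn


class SpatialGraphConv(nn.Module):
    """Graph convolution over body joints with a fixed adjacency.

    Illustrative only: the paper's CST-GC block fuses two predefined
    graph topologies; here a single assumed adjacency matrix is used.
    """

    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)  # (J, J), assumed normalized
        self.linear = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, frames, joints, channels)
        x = torch.einsum("ij,btjc->btic", self.A, x)  # aggregate neighbor joints
        return self.linear(x)  # per-joint channel mixing


class SharedTemporalAttention(nn.Module):
    """Single-head temporal attention whose map is shared by all joints.

    A rough stand-in for the lightweight shared-attention-map idea in
    SST-Att; the actual sharing scheme is not specified in the abstract.
    """

    def __init__(self, channels):
        super().__init__()
        self.qkv = nn.Linear(channels, 3 * channels)
        self.scale = channels ** -0.5

    def forward(self, x):
        # x: (batch, frames, joints, channels); attend across frames
        b, t, j, c = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # One attention map per clip, averaged over joints, so every
        # joint reuses the same temporal weights (cheap to compute).
        attn = torch.einsum("btjc,bsjc->bts", q, k) * self.scale / j
        attn = attn.softmax(dim=-1)
        return torch.einsum("bts,bsjc->btjc", attn, v)


# Usage sketch: 17 joints (Human3.6M skeleton), 81-frame clip, 64 channels.
A = torch.eye(17)  # placeholder; a real adjacency would encode bone links
x = torch.randn(2, 81, 17, 64)
x = SpatialGraphConv(64, 64, A)(x)
x = SharedTemporalAttention(64)(x)
print(x.shape)  # torch.Size([2, 81, 17, 64])
```

In the paper's architecture such blocks are cascaded inside a U-shaped network; the sketch above only applies one block of each kind in sequence.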
