Abstract

Although Deep Convolutional Neural Networks (DCNNs) have advanced 3D human pose estimation, ambiguity remains the most challenging problem in such tasks. Inspired by the Human Perception Mechanism (HPM), we propose an image-to-pose coding method to bridge the gap between image cues and 3D poses, thereby alleviating the ambiguity of 3D human pose estimation. First, we divide the whole 3D pose space into multiple subregions named pose codes, turning a disambiguation problem into a classification problem. The proposed coding mechanism covers multiple camera views and provides a complete description of the 3D pose space. Second, it is noteworthy that the articulated structure of the human body lies on a sophisticated product manifold, and error accumulation along the chain structure inevitably degrades coding performance. Therefore, in image space, we extract image cues from independent local image patches rather than from the whole image. The mapping relationship between image cues and 3D pose codes is established by a set of DCNNs. The image-to-pose coding method transforms implicit image cues into explicit constraints. Finally, the image-to-pose coding method is integrated into a linear matching mechanism to construct a 3D pose estimation method that effectively alleviates ambiguity. We conduct extensive experiments on widely used public benchmarks. The experimental results show that our method effectively alleviates the ambiguity in 3D pose recovery and is robust to viewpoint variations.
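To make the pose-coding idea concrete, the following minimal sketch partitions a 3D pose space into subregions ("pose codes") so that pose estimation reduces to classifying which subregion a pose falls in. The k-means-style partitioning, function names, and the 17-joint toy data are illustrative assumptions, not the paper's actual coding mechanism.

```python
import numpy as np

def build_pose_codes(poses, num_codes, num_iters=20, seed=0):
    """Cluster flattened 3D poses into `num_codes` subregion centers
    (an assumed partitioning scheme, for illustration only)."""
    rng = np.random.default_rng(seed)
    centers = poses[rng.choice(len(poses), num_codes, replace=False)]
    for _ in range(num_iters):
        # assign each pose to its nearest subregion center
        d = np.linalg.norm(poses[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned poses
        for k in range(num_codes):
            members = poses[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers

def encode_pose(pose, centers):
    """Map a pose to its code: the index of the nearest subregion center."""
    return int(np.linalg.norm(centers - pose, axis=1).argmin())

# toy example: 200 random "poses" (17 joints x 3 coords, flattened)
rng = np.random.default_rng(1)
poses = rng.normal(size=(200, 17 * 3))
centers = build_pose_codes(poses, num_codes=8)
code = encode_pose(poses[0], centers)
print(code)  # an integer class label in [0, 8)
```

Once such codes exist, a classifier (e.g., a DCNN over image patches, as the abstract describes) can predict the code index instead of regressing continuous joint coordinates, which is what turns the disambiguation problem into a classification problem.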
