Abstract

Transformer-based methods have emerged as the gold standard in 2D-to-3D human pose estimation from video sequences, largely thanks to their powerful spatial–temporal feature encoders. Prior work has made concerted efforts to engineer spatial and temporal encoders from transformer blocks, reshaping the input from per-frame joint information into dynamic joint trajectories. Despite this, the inherent limitations of the spatial–temporal structure lead to inadequate extraction and utilization of temporal information. To address this issue, we propose the Spatial–Temporal–ReTemporal Transformer (STRFormer), a model that employs two separate temporal transformer blocks to extract motion information from video sequences: one block processes the sequence in its original frame order, while the other processes it in reversed order. Alternating these two temporal blocks with the spatial block maximizes the extraction of temporal-domain information, allowing a more thorough exploitation of temporal cues and a more comprehensive representation of pose and its evolution over time. Furthermore, we introduce a novel error metric, the Mean Per-Joint Position Acceleration Error (MPJAE), which accounts for the velocity of body joints across adjacent predicted frames and thereby enables a finer-grained evaluation of predicted poses. Extensive experiments on several open benchmarks show that STRFormer, trained with the MPJAE loss, achieves highly competitive results against state-of-the-art models, demonstrating its practical applicability to 2D-to-3D human pose estimation. We plan to release our code publicly for further research.
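The abstract does not give implementation details for the dual temporal blocks, but the core idea can be illustrated with a minimal sketch. The following PyTorch snippet is an assumption-laden illustration, not the authors' implementation: the module names, feature dimension, and the averaging fusion step are all hypothetical, chosen only to show one transformer block attending over frames in original order and a second over the reversed sequence.

```python
# Minimal sketch of the dual-temporal idea: one transformer block
# attends over frames in original order, a second over the reversed
# sequence. Module names, dimensions, and the averaging fusion are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class DualTemporalBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Temporal block for the original frame order.
        self.forward_block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        # "Re-temporal" block for the reversed frame order.
        self.reverse_block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) -- per-frame pose features.
        fwd = self.forward_block(x)
        # Reverse the time axis, encode, then flip back so both
        # streams are aligned frame-by-frame before fusion.
        rev = self.reverse_block(torch.flip(x, dims=[1]))
        rev = torch.flip(rev, dims=[1])
        return 0.5 * (fwd + rev)  # assumed fusion: simple averaging

x = torch.randn(2, 81, 256)          # e.g., an 81-frame receptive field
print(DualTemporalBlock()(x).shape)  # torch.Size([2, 81, 256])
```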
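Likewise, the abstract defines MPJAE only informally ("accounts for the velocity of body joints across adjacent predicted frames"). A natural formalization, assumed here by analogy with how the standard MPJVE metric extends MPJPE to velocities, is the mean Euclidean distance between predicted and ground-truth per-joint accelerations, i.e., second-order temporal differences of joint positions:

```python
# Hedged sketch of MPJAE under the assumption that it is the mean
# Euclidean error of per-joint accelerations (second-order temporal
# differences), analogous to the MPJVE velocity metric.
import torch

def mpjae(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """pred, gt: (frames, joints, 3) 3D joint positions in mm."""
    # Second differences approximate acceleration (frame interval = 1).
    accel_pred = pred[2:] - 2 * pred[1:-1] + pred[:-2]
    accel_gt = gt[2:] - 2 * gt[1:-1] + gt[:-2]
    # Average the Euclidean error over frames and joints.
    return torch.linalg.norm(accel_pred - accel_gt, dim=-1).mean()

pred = torch.randn(81, 17, 3)  # e.g., a 17-joint Human3.6M skeleton
gt = torch.randn(81, 17, 3)
print(mpjae(pred, gt))
```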
