Abstract
In this paper, we propose an approach for lifting 2D human pose estimates to 3D using Single Input–Single Output (SISO) Ultra-Wideband (UWB) radar. This method addresses the challenge of reconstructing 3D human poses from 1D radar signals, a task traditionally hindered by low spatial resolution and an ill-posed inverse problem. The difficulty is exacerbated by the inherent ambiguity of 3D pose reconstruction, since multiple 3D poses can produce similar 2D projections. Our solution, the Radar PoseLifter network, leverages the micro-Doppler signatures present in 1D radar echoes to lift 2D pose information into 3D structure. The network is designed to capture the long-range dependencies in sequences of 2D poses, employing a fully convolutional architecture built on dilated temporal convolutions for efficient processing. We evaluated the Radar PoseLifter network on the HPSUR dataset, which comprises data from five individuals with varying physical characteristics performing a diverse range of actions. Our experimental results demonstrate the method's robustness and accuracy in estimating complex human poses. This research advances human motion capture with radar technology and offers a viable solution for applications where precision and reliability are paramount. The study deepens the understanding of 3D pose estimation from radar data and opens new avenues for practical applications in various fields.
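The paper's exact architecture is not given in the abstract, but the core building block it names, a dilated temporal convolution over a sequence of 2D poses, can be sketched minimally. The function below is an illustrative NumPy implementation (not the authors' code): it slides a kernel along the time axis with gaps of `dilation` frames, so stacked layers with growing dilations cover long-range dependencies cheaply. The input shape `(T, C)`, with `C` the flattened per-frame 2D joint coordinates, is an assumption for this sketch.

```python
import numpy as np

def dilated_temporal_conv(x, w, dilation):
    """Dilated 1D convolution along the time axis (illustrative sketch).

    x: (T, C) sequence of per-frame features, e.g. flattened 2D joints.
    w: (K, C, C_out) kernel with K temporal taps.
    dilation: number of frames between consecutive taps.
    Returns: (T_out, C_out) with T_out = T - (K - 1) * dilation.
    """
    T, C = x.shape
    K, _, C_out = w.shape
    T_out = T - (K - 1) * dilation
    y = np.zeros((T_out, C_out))
    for t in range(T_out):
        for k in range(K):
            # Tap k looks `k * dilation` frames ahead of position t.
            y[t] += x[t + k * dilation] @ w[k]
    return y

# Example: kernel size 3 with dilations 1, 3, 9 in successive layers
# gives a temporal receptive field of 1 + 2*(1 + 3 + 9) = 27 frames.
x = np.arange(10, dtype=float).reshape(10, 1)   # 10 frames, 1 channel
w = np.ones((3, 1, 1))                          # 3-tap summing kernel
y = dilated_temporal_conv(x, w, dilation=2)
# y[0] sums frames 0, 2, 4 of x.
```

Stacking such layers with exponentially growing dilations is a standard way (as in fully convolutional sequence models) to trade a small per-layer cost for a wide temporal receptive field, which matches the abstract's claim of handling long-range dependencies efficiently.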