Abstract

3D hand pose estimation can be achieved by cascading a feature extraction module and a feature exploitation module, where reinforcement learning (RL) has proven to be an effective approach to feature exploitation. This paper highlights the potential for improving accuracy through a better exploitation strategy and proposes an Adaptive Step-Critic Shared RL (ASCS-RL) strategy for accurate feature exploitation in 3D hand pose estimation. Hand joint features are exploited in a multi-task manner and divided into two groups according to the distribution of estimation errors. An RL-based adaptive-step (AS-RL) strategy is then used to obtain the optimal step size for better exploitation. The exploitation process is finally performed using a critic-shared RL (CS-RL) strategy, in which both groups share a universal critic. Ablation studies and extensive experiments evaluate the performance of ASCS-RL on the ICVL and NYU datasets. The results show that the strategy achieves state-of-the-art accuracy in monocular depth-based 3D hand pose estimation, performing best on ICVL. Experiments also validate that ASCS-RL achieves a better tradeoff between accuracy and runtime speed.
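To make the critic-shared idea concrete, the following is a minimal sketch, not the authors' implementation: two actor networks, one per joint group, predict refinement offsets for the hand pose, a single shared critic scores the refined pose, and a scalar step size stands in for the adaptive-step (AS-RL) component. All class names, dimensions, and the scalar step size are assumptions introduced only for illustration.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Predicts a 3D refinement offset for one group of hand joints (illustrative)."""
    def __init__(self, feat_dim: int, num_joints: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_joints * 3),   # one 3D offset per joint
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.net(feat)

class SharedCritic(nn.Module):
    """A single critic scoring refined poses produced by both joint groups."""
    def __init__(self, feat_dim: int, num_joints: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + num_joints * 3, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, feat: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([feat, pose], dim=-1))

# Hypothetical usage: refine each group's pose with a (here fixed) step size,
# then score the combined refinement with the shared critic.
feat_dim, joints_a, joints_b = 256, 8, 8            # assumed dimensions
actor_a, actor_b = Actor(feat_dim, joints_a), Actor(feat_dim, joints_b)
critic = SharedCritic(feat_dim, joints_a + joints_b)

feat = torch.randn(1, feat_dim)                     # extracted hand feature
pose = torch.zeros(1, (joints_a + joints_b) * 3)    # initial pose estimate
step = torch.tensor(0.5)                            # step size chosen by AS-RL in the paper

offset = torch.cat([actor_a(feat), actor_b(feat)], dim=-1)
refined = pose + step * offset                      # one exploitation step
value = critic(feat, refined)                       # shared-critic score
```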
