Abstract
Existing gait recognition systems focus on extracting robust gait features from silhouette images, and they have indeed achieved great success. However, gait can be sensitive to appearance factors such as clothing and carried items. Compared with appearance-based methods, model-based gait recognition is promising because of its robustness against such variations as clothing and carried baggage. With the development of human pose estimation, the difficulty of model-based methods has been mitigated in recent years. We leverage recent advances in action recognition to embed a human pose sequence into a vector and introduce Spatial Temporal Graph Convolution Blocks (STGCB), which are commonly used in action recognition, for gait recognition. Furthermore, we build velocity and bone-angle features to enrich the input of the network. Experiments on the popular OUMVLP-Pose gait dataset show that our method achieves state-of-the-art (SOTA) performance in model-based gait recognition.
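To make the feature construction concrete, the sketch below shows one plausible way to derive velocity and bone-angle features from a 2D pose sequence, as described in the abstract. This is not the authors' code: the skeleton edge list (BONES), the function name build_features, and the choice of first-order temporal differences and arctan2-based bone orientations are illustrative assumptions.

    # Minimal sketch (assumed, not the authors' implementation): build velocity
    # and bone-angle features from a pose sequence to enrich the network input.
    import numpy as np

    # Hypothetical parent->child joint pairs defining the bones of a skeleton.
    BONES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]

    def build_features(poses: np.ndarray) -> dict:
        """poses: array of shape (T, J, 2) holding (x, y) joint coordinates."""
        # Velocity: first-order temporal difference of joint positions,
        # padded with the first frame so the output keeps T frames.
        velocity = np.diff(poses, axis=0, prepend=poses[:1])          # (T, J, 2)

        # Bone angle: orientation of each parent->child vector w.r.t. the x-axis.
        parents = poses[:, [p for p, _ in BONES], :]                  # (T, B, 2)
        children = poses[:, [c for _, c in BONES], :]                 # (T, B, 2)
        bone_vec = children - parents
        bone_angle = np.arctan2(bone_vec[..., 1], bone_vec[..., 0])   # (T, B)

        return {"position": poses, "velocity": velocity, "bone_angle": bone_angle}

    # Usage example: a random 30-frame sequence of 6 joints.
    features = build_features(np.random.rand(30, 6, 2))
    print({k: v.shape for k, v in features.items()})

In a full pipeline these per-frame features would be stacked as input channels to the spatial-temporal graph convolution blocks; the stacking and network details are beyond what the abstract specifies.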