Abstract

Existing gait recognition systems focus on extracting robust gait features from silhouette images, and they have indeed achieved great success. However, gait can be sensitive to appearance factors such as clothing and carried items. Compared with appearance-based methods, model-based gait recognition is promising because of its robustness to such variations as clothing and carried baggage. With the development of human pose estimation, the main difficulty of model-based methods has been mitigated in recent years. We leverage recent advances in action recognition to embed a human pose sequence into a vector, and introduce Spatial Temporal Graph Convolution Blocks (STGCB), which are commonly used in action recognition, for gait recognition. Furthermore, we construct velocity and bone-angle features to enrich the input of the network. Experiments on the popular OUMVLP-Pose gait dataset show that our method achieves state-of-the-art (SOTA) performance in model-based gait recognition.
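The velocity and bone-angle features mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows one plausible way to derive such features from a pose sequence, assuming 2D keypoints and a hypothetical skeleton given as (parent, child) joint-index pairs.

```python
import numpy as np

def velocity_features(pose_seq):
    """Frame-to-frame joint velocities for a pose sequence.

    pose_seq: (T, J, 2) array of 2D joint coordinates over T frames.
    Returns a (T, J, 2) array; the first frame is zero-padded so the
    output aligns with the input sequence length.
    """
    vel = np.zeros_like(pose_seq)
    vel[1:] = pose_seq[1:] - pose_seq[:-1]
    return vel

def bone_angle_features(pose_seq, bones):
    """Angle of each bone vector relative to the image x-axis.

    bones: list of (parent, child) joint-index pairs (an assumed
    skeleton, not the one used in the paper).
    Returns a (T, len(bones)) array of angles in radians.
    """
    angles = []
    for parent, child in bones:
        d = pose_seq[:, child] - pose_seq[:, parent]  # (T, 2) bone vectors
        angles.append(np.arctan2(d[:, 1], d[:, 0]))
    return np.stack(angles, axis=1)
```

Features like these can be stacked with the raw coordinates along the channel dimension before being fed to the spatial-temporal graph convolution blocks.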
