Abstract

Gait recognition refers to video-based biometric techniques that identify subjects by their walking patterns at a distance. Despite the progress of existing gait recognition methods, recognition accuracy remains limited when subjects carry bags or wear coats or jackets. To address this issue, this paper proposes extracting human gait information from receptive fields of different ranges, providing richer internal features for the deep network. Moreover, we propose two attention mechanisms, Local Pyramid Attention and Global Attention Fusion Learning, to focus on the key features of human gait from different perspectives. Depending on which attention mechanisms are employed, three network variants are derived: the Gait Pyramid Attention Network (GPAN) contains both attention mechanisms, while GPAN-P and GPAN-L each contain a single one. We evaluated our method on two large datasets, CASIA-B and OUMVLP. Experiments show that the proposed network achieves an average rank-1 accuracy of 97.8% on CASIA-B under normal walking conditions, and 94.2% and 81.8% on CASIA-B under the more challenging bag-carrying and coat-wearing scenarios, respectively, clearly surpassing state-of-the-art methods.
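To illustrate the idea of combining multi-range receptive fields with a local, pyramid-style attention, the sketch below shows one plausible realization in PyTorch. It is not the authors' released code; the module name, layer sizes, and the exact fusion scheme are assumptions made only for illustration.

```python
# Minimal sketch (assumed implementation, not the paper's code) of multi-scale
# feature extraction re-weighted by a pyramid-style spatial attention.
import torch
import torch.nn as nn


class LocalPyramidAttentionSketch(nn.Module):
    """Toy multi-scale attention over a silhouette feature map of shape (B, C, H, W)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Parallel branches with different kernel sizes give different
        # receptive-field ranges, as described in the abstract.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in (1, 3, 5)]
        )
        # A 1x1 convolution predicts a spatial weight map per branch.
        self.attn = nn.Conv2d(channels * 3, 3, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]               # each (B, C, H, W)
        weights = torch.softmax(self.attn(torch.cat(feats, dim=1)), dim=1)  # (B, 3, H, W)
        # Weighted sum of the branches, followed by a residual connection.
        fused = sum(w.unsqueeze(1) * f for w, f in zip(weights.unbind(dim=1), feats))
        return fused + x


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 22)                   # batch of gait feature maps
    print(LocalPyramidAttentionSketch(64)(x).shape)  # torch.Size([2, 64, 32, 22])
```

A global attention mechanism such as the paper's Global Attention Fusion Learning would operate on the fused representation across the whole sequence; its details are not specified in the abstract, so they are omitted here.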
