Abstract

Gait recognition offers a convenient means of human identification, as it can identify a person with less cooperation and intrusion than other biometric modalities. Current gait recognition frameworks either use a template to extract temporal features or treat the whole person as a single unit, so they capture only limited temporal information and coarse features. To overcome this problem, we propose a network consisting of two parts: Temporal Feature Fusion (TFF) and Fine-grained Feature Extraction (FFE). First, TFF extracts the most representative temporal information from raw gait sequences. Next, we apply the idea of partial features to the fused temporal features to extract finer-grained spatial block features. Notably, the proposed algorithm provides an effective feature extraction framework for complex gait recognition, since it focuses both on temporal fusion for representative information and on the extraction of fine-grained spatial features. Extensive experiments show that our method outperforms state-of-the-art methods, including GaitSet and GaitNet, on CASIA-B and mini-OUMVLP. In particular, the average rank-1 accuracy over all probe views under the normal walking condition (NM) reaches 95.7%.
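The two-stage pipeline described above (temporal fusion followed by part-based spatial pooling) can be sketched roughly as follows. The concrete operators here, max-pooling across frames for fusion and horizontal strips for partitioning, are illustrative assumptions in the spirit of partial-feature methods, not the paper's exact TFF/FFE design:

```python
import numpy as np

def temporal_feature_fusion(seq):
    # seq: (T, H, W) stack of per-frame silhouette feature maps.
    # Illustrative assumption: fuse the sequence by keeping, at each
    # spatial location, the maximum response across all T frames.
    return seq.max(axis=0)  # (H, W)

def fine_grained_features(fused, num_blocks=4):
    # Split the fused map into horizontal blocks and pool each block,
    # yielding one descriptor per body region (head, torso, legs, ...).
    blocks = np.array_split(fused, num_blocks, axis=0)
    return np.array([b.mean() for b in blocks])  # (num_blocks,)

# Toy input: 5 frames of 8x4 feature maps.
seq = np.random.rand(5, 8, 4)
fused = temporal_feature_fusion(seq)
descriptor = fine_grained_features(fused)
print(fused.shape, descriptor.shape)  # (8, 4) (4,)
```

In a real network both stages would operate on learned convolutional feature maps and the per-block pooling would feed separate fully connected layers, but the data flow is the same: frames are first collapsed along time, then partitioned along the body axis.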
