Abstract
Gait recognition offers a convenient route to human identification, as it can identify a person with less cooperation and intrusion than other biometric modalities. Current gait recognition frameworks either compress the sequence into a template to extract temporal features or treat the whole body as a single unit, so they capture only limited temporal information and coarse spatial features. To overcome this problem, we propose a network consisting of two parts: Temporal Feature Fusion (TFF) and Fine-grained Feature Extraction (FFE). First, TFF extracts the most representative temporal information from raw gait sequences. Next, we apply the idea of partial features to the fused temporal features to extract more fine-grained spatial block features. The proposed algorithm provides an effective feature-extraction framework for complex gait recognition, as it focuses both on temporal fusion of representative information and on the extraction of fine-grained spatial features. Extensive experiments show outstanding performance on CASIA-B and mini-OUMVLP compared with other state-of-the-art methods, including GaitSet and GaitNet. In particular, the average rank-1 accuracy over all probe views under the normal walking condition (NM) achieves 95.7%.
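The two-stage pipeline described above (temporal fusion of per-frame feature maps, then horizontal partitioning into block features) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the max-over-time fusion rule, the strip count, and the max-plus-mean pooling per strip are all assumptions borrowed from common partial-feature designs such as GaitSet.

```python
import numpy as np

def temporal_feature_fusion(frames):
    """Fuse per-frame feature maps over time.

    frames: array of shape (T, C, H, W), one feature map per silhouette frame.
    Max over the time axis keeps the strongest activation at each location
    (an assumed fusion rule standing in for the paper's TFF module).
    """
    return frames.max(axis=0)

def fine_grained_features(fused, num_strips=8):
    """Split a fused feature map into horizontal strips and pool each.

    fused: array of shape (C, H, W). Each strip is reduced to a C-dim
    vector by max + mean pooling, yielding (num_strips, C) block features
    (an assumed stand-in for the paper's FFE module).
    """
    strips = np.array_split(fused, num_strips, axis=1)  # split along height
    return np.stack([s.max(axis=(1, 2)) + s.mean(axis=(1, 2)) for s in strips])

# Toy usage: 30 frames of 64-channel 16x11 feature maps (hypothetical sizes).
frames = np.random.rand(30, 64, 16, 11)
fused = temporal_feature_fusion(frames)       # shape (64, 16, 11)
parts = fine_grained_features(fused)          # shape (8, 64)
```

In such designs, each of the strip vectors is typically passed through its own small mapping layer and compared with a separate distance metric, which is what makes the block features "fine-grained" relative to a single global descriptor.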