Abstract

Gait recognition is an emerging long-distance biometric technology applied in many fields, including video surveillance. Most recent gait recognition methods treat human silhouettes as global or local regions to extract gait properties. However, the global approach may ignore fine-grained differences between limbs, whereas the local approach focuses only on the details of individual body parts and cannot capture the correlation between adjacent regions. Moreover, gait recognition is a multi-view task, and view changes have a significant impact on the integrity of the silhouette, so the disturbance introduced by the view itself must also be considered. To address these problems, this paper proposes a novel gait recognition framework, gait aggregation multi-feature representation (GaitAMR), to extract the most discriminative subject features. In GaitAMR, we propose a holistic and partial temporal aggregation strategy that extracts body movement descriptors both globally and locally. In addition, we use the optimal view features as supplementary information for the spatiotemporal features, thereby enhancing view stability during recognition. By effectively aggregating feature representations from different domains, our method sharpens the discrimination of gait patterns between subjects. Experimental results on public gait datasets show that GaitAMR improves gait recognition under occlusion, outperforming state-of-the-art methods.
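The holistic-and-partial idea described above can be sketched in a few lines. The snippet below is an illustrative toy, not the GaitAMR implementation: it assumes a sequence of binary silhouette frames, aggregates them over time by set-style max pooling, and then concatenates a global descriptor with per-strip descriptors obtained by horizontally partitioning the body, a common way to combine global and local cues in silhouette-based gait methods. The function name and pooling choices are our own assumptions for illustration.

```python
import numpy as np

def global_local_features(silhouettes: np.ndarray, num_parts: int = 4) -> np.ndarray:
    """Toy global + local gait descriptor (illustrative, not GaitAMR).

    silhouettes: (T, H, W) binary frames of one walking sequence.
    Returns a 1-D vector: a row-wise global occupancy profile (length H)
    concatenated with one mean-occupancy value per horizontal body strip.
    """
    T, H, W = silhouettes.shape
    # Temporal aggregation: max over the frame axis (set pooling)
    pooled = silhouettes.max(axis=0)                 # (H, W)
    # Holistic branch: occupancy profile over image rows
    global_feat = pooled.mean(axis=1)                # (H,)
    # Partial branch: split the body into horizontal strips
    parts = np.array_split(pooled, num_parts, axis=0)
    part_feats = np.array([p.mean() for p in parts])  # (num_parts,)
    return np.concatenate([global_feat, part_feats])

# Usage: 5 frames of an 8x6 silhouette give an 8 + 4 = 12-dim descriptor
feat = global_local_features(np.ones((5, 8, 6)), num_parts=4)
```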
