Abstract
Gait recognition is an emerging long-distance biometric technology applied in many fields, including video surveillance. Most recent gait recognition methods treat human silhouettes as global or local regions to extract gait properties. However, the global approach may ignore fine-grained differences between limbs, whereas the local approach focuses only on the details of individual body parts and cannot capture the correlation between adjacent regions. Moreover, gait recognition is a multi-view task: view changes have a significant impact on the integrity of the silhouette, which makes it necessary to account for the disturbances introduced by the view itself. To address these problems, this paper proposes a novel gait recognition framework, namely gait aggregation multi-feature representation (GaitAMR), to extract the most discriminative subject features. In GaitAMR, we propose a holistic and partial temporal aggregation strategy that extracts body movement descriptors both globally and locally. In addition, we use the optimal view features as supplementary information for the spatiotemporal features, thereby enhancing view stability in the recognition process. By effectively aggregating feature representations from different domains, our method enhances the discrimination of gait patterns between subjects. Experimental results on public gait datasets show that GaitAMR improves gait recognition under occlusion conditions, outperforming state-of-the-art methods.
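The abstract describes combining holistic (whole-silhouette) and partial (body-part) descriptors after temporal aggregation. As a rough illustration of that general idea (not the authors' actual GaitAMR architecture), the sketch below aggregates per-frame feature maps over time, then concatenates a global pooled vector with vectors pooled from horizontal body strips; all function names, shapes, and pooling choices here are illustrative assumptions.

```python
import numpy as np

def aggregate_features(frame_maps, num_parts=4):
    """Illustrative global + local aggregation over per-frame feature maps.

    frame_maps: array of shape (T, C, H, W) -- T frames of C-channel maps.
    Returns a 1-D descriptor combining a holistic (global) vector with
    part-based (local) vectors from horizontal strips. This is a generic
    sketch of the global/local idea, not the published GaitAMR method.
    """
    # Temporal aggregation: max over frames keeps the strongest response
    agg = frame_maps.max(axis=0)                         # (C, H, W)

    # Holistic branch: average-pool over the whole spatial map
    global_feat = agg.mean(axis=(1, 2))                  # (C,)

    # Partial branch: average-pool each horizontal strip separately
    strips = np.array_split(agg, num_parts, axis=1)      # split along height
    local_feats = [s.mean(axis=(1, 2)) for s in strips]  # num_parts x (C,)

    # Aggregate holistic and partial descriptors into one representation
    return np.concatenate([global_feat] + local_feats)   # ((1+num_parts)*C,)

# Example: 30 frames of 64-channel 16x11 feature maps
feats = aggregate_features(np.random.rand(30, 64, 16, 11))
print(feats.shape)  # (320,)
```

In a real system the global and local branches would typically be learned jointly (e.g., with a metric-learning loss) rather than fixed pooling, but the concatenation step shows how complementary holistic and part-level cues can be fused into a single gait descriptor.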