Abstract

Gait recognition can be used for person identification and re-identification, either on its own or in conjunction with other biometrics. Gait has both spatial and temporal attributes, and it has been observed that decoupling spatial and temporal features can better exploit gait features at a fine-grained level. However, the spatial-temporal correlations of gait video signals are lost in the decoupling process. Direct 3D convolution approaches can retain such correlations, but they also introduce unnecessary interference. Instead of a common 3D convolution solution, this paper proposes integrating the decoupling process into a 3D convolution framework for cross-view gait recognition. In particular, a novel block consisting of a Parallel-insight Convolution layer integrated with a Spatial-Temporal Dual-Attention (STDA) unit is proposed as the basic block for global spatial-temporal information extraction. Under the guidance of the STDA unit, this block integrates the spatial-temporal information extracted by two decoupled models while retaining the spatial-temporal correlations. In addition, a Multi-Scale Salient Feature Extractor is proposed to further exploit fine-grained features by extending part-based features with context awareness and adaptively aggregating the spatial features. Extensive experiments on three popular gait datasets, namely CASIA-B, OULP, and OUMVLP, demonstrate that the proposed method outperforms state-of-the-art methods.
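
The abstract does not give implementation details, but the core idea of decoupled spatial and temporal branches fused under attention inside a 3D convolution framework can be illustrated concretely. Below is a minimal PyTorch sketch assuming 5D silhouette tensors of shape (N, C, T, H, W); the class name STDABlock, the kernel sizes, and the gating design are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class STDABlock(nn.Module):
    """Illustrative sketch (not the authors' code): two decoupled 3D-conv
    branches, one spatial and one temporal, fused by an attention gate so
    that spatial-temporal correlations survive in the output."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Spatial branch: a 1x3x3 kernel acts as a per-frame 2D convolution.
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, 3, 3), padding=(0, 1, 1))
        # Temporal branch: a 3x1x1 kernel convolves each pixel across time.
        self.temporal = nn.Conv3d(in_ch, out_ch, (3, 1, 1), padding=(1, 0, 0))
        # Dual-attention stand-in: channel-wise gates computed from both
        # branches decide how to mix them.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(2 * out_ch, out_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        s = self.spatial(x)   # decoupled spatial features
        t = self.temporal(x)  # decoupled temporal features
        a = self.gate(torch.cat([s, t], dim=1))  # gate values in (0, 1)
        # Attention-guided fusion: neither view is discarded outright.
        return a * s + (1.0 - a) * t


# A batch of two 30-frame, 64x44 single-channel silhouette clips.
clips = torch.randn(2, 1, 30, 64, 44)
out = STDABlock(1, 32)(clips)
print(out.shape)  # torch.Size([2, 32, 30, 64, 44])
```

Expressing the fusion as a convex combination, a * s + (1 - a) * t, is one simple way to let an attention unit arbitrate between the two decoupled views without losing either; the paper's actual STDA unit may weight spatial and temporal information differently.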
