Abstract

Gait has unique physiological characteristics and can be recognized at long distances, making gait recognition well suited to applications such as home security and identity verification. Methods based on graph convolutional networks usually extract features along the spatial and temporal dimensions by stacking GCNs and TCNs, but because different joints are interconnected at different moments, splitting the spatial and temporal dimensions can lose gait information. Focusing on this problem, we propose a gait recognition network, Multi-scale Spatio-Temporal Gait (MST-Gait), which learns multi-scale gait information from the spatial and temporal dimensions simultaneously. We design a multi-scale spatio-temporal groups Transformer (MSTGT) to model the correlations of intra-frame and inter-frame joints jointly, together with a multi-scale segmentation strategy that captures the periodic and local features of gait. To fully exploit the temporal information of gait motion, we design a fusion temporal convolution (FTC) that aggregates temporal information at different scales along with motion information. Experiments on the popular CASIA-B and OUMVLP-Pose gait datasets show that our method outperforms most existing skeleton-based methods, verifying the effectiveness of the proposed modules.
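The contrast the abstract draws, between stacking separate spatial (GCN) and temporal (TCN) stages versus attending over all joint-time pairs at once, can be illustrated with a toy sketch. This is not the paper's MSTGT implementation; it is a minimal NumPy example with made-up tensor sizes and identity projections, showing that factorized spatial-then-temporal attention and joint spatio-temporal attention mix information differently over the same (frames × joints × channels) skeleton sequence.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x):
    # Single-head self-attention with identity Q/K/V projections for brevity.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

T, V, C = 4, 5, 8  # frames, joints, channels (toy sizes, not from the paper)
rng = np.random.default_rng(0)
seq = rng.standard_normal((T, V, C))

# Factorized (GCN/TCN-style split): spatial attention within each frame,
# then temporal attention along each joint's trajectory.
spatial = np.stack([attention(seq[t]) for t in range(T)])
factorized = np.stack([attention(spatial[:, v]) for v in range(V)], axis=1)

# Joint spatio-temporal: every (frame, joint) token attends to all others,
# so cross-joint, cross-frame correlations are modeled in one step.
tokens = seq.reshape(T * V, C)
joint = attention(tokens).reshape(T, V, C)

print(factorized.shape, joint.shape)  # same shape, different feature mixing
```

The two paths produce tensors of the same shape but different values, which is the abstract's point: splitting the spatial and temporal dimensions constrains which joint-time correlations can be captured.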
