Abstract

Gait recognition has become a mainstream technology for identification due to its ability to capture gait features over long distances without subject cooperation and its resistance to camouflage. However, current gait recognition methods face a challenge: they use a single network to extract both temporal and spatial features from gait sequences. This imposes a heavy burden on the network, reducing extraction efficiency. To solve this problem, we propose a two-branch network to extract the spatio-temporal features of gait sequences. One branch focuses primarily on spatial feature extraction, while the other concentrates on temporal feature extraction. This design lets each branch specialize in a single task, leading to significant performance improvements. For temporal feature extraction, we propose the Global Temporal Information Extraction Network (GTIEN). GTIEN extracts temporal features of gait sequences by sequentially exploring the relationship between adjacent gait silhouettes at the pixel and block levels. For spatial feature extraction, we introduce the Selective Horizontal Pyramid Convolution Network (SHPCN). SHPCN explores the multi-granularity features of gait silhouettes from global and local perspectives and assigns them appropriate weights according to their importance. By reasonably combining the temporal features extracted by GTIEN with the spatial features extracted by SHPCN, we can effectively learn the spatio-temporal information of gait sequences. Extensive experiments on CASIA-B and OUMVLP demonstrate that our method outperforms several state-of-the-art methods.
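The two-branch idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the concept only: the function names, the adjacent-frame differencing used as a stand-in for GTIEN's pixel-level temporal modeling, and the weighted horizontal strips used as a stand-in for SHPCN's multi-granularity pooling are all assumptions, not the paper's actual architecture.

```python
import numpy as np

def temporal_branch(seq):
    # Stand-in for GTIEN: capture motion via differences between
    # adjacent silhouettes, averaged over time.
    diffs = seq[1:] - seq[:-1]           # (T-1, H, W) frame-to-frame change
    return np.abs(diffs).mean(axis=0)    # (H, W) temporal feature map

def spatial_branch(seq, num_strips=4):
    # Stand-in for SHPCN: horizontal strips of the time-averaged
    # silhouette, reweighted by a softmax over their mean activations.
    mean_sil = seq.mean(axis=0)                        # (H, W)
    strips = np.array_split(mean_sil, num_strips, axis=0)
    feats = np.array([s.mean() for s in strips])       # one value per strip
    weights = np.exp(feats) / np.exp(feats).sum()      # importance weights
    return weights * feats

seq = np.random.rand(8, 16, 11)   # toy sequence: T=8 frames of 16x11 silhouettes
t_feat = temporal_branch(seq)     # temporal branch output
s_feat = spatial_branch(seq)      # spatial branch output
fused = np.concatenate([t_feat.ravel(), s_feat])   # simple fusion by concatenation
print(fused.shape)                # (180,) = 16*11 pixel features + 4 strip features
```

Each branch sees the same input sequence but produces a complementary view of it, so concatenating (or otherwise combining) the two outputs yields a joint spatio-temporal descriptor; the paper's actual combination strategy is of course learned rather than a fixed concatenation.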
