Abstract

Gait recognition is a biometric technology used to identify people from their walking patterns. Most traditional Computer Vision-based gait recognition approaches have been shown to work satisfactorily when the training and test conditions are similar. However, the two conditions may differ due to the presence of co-variate conditions such as carrying a bag or wearing a coat. These co-variate conditions alter the silhouette shape of an individual, which degrades the performance of gait recognition algorithms. In this work, we develop an automated approach that performs gait recognition satisfactorily even in the presence of co-variate conditions. First, we determine a set of generic key poses in a gait cycle, and then compute gait features corresponding to these poses, which we term the Dynamic Gait Energy Image (DGEI). Next, a Generative Adversarial Network (GAN) model is employed to predict the corresponding DGEI images without the co-variates. These final gait features are readily comparable with the gallery sequences, and hence the final recognition is performed using the GAN-generated DGEI images. Extensive experimental studies on the publicly available CASIA B, TUM-GAID, and OU-ISIR Treadmill B data sets verify the effectiveness of the proposed approach and its superiority over state-of-the-art techniques.
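As a rough illustration of the feature-extraction step summarized above (not the authors' implementation, whose exact DGEI definition is not given in the abstract), the following Python sketch builds a standard Gait Energy Image by averaging aligned binary silhouettes over a gait cycle, and approximates the pose-specific DGEI idea by clustering frames into key poses before averaging. The clustering criterion, the pose count, and all function names are assumptions made for this example.

```python
import numpy as np
from sklearn.cluster import KMeans


def gait_energy_image(silhouettes):
    """Average a stack of aligned binary silhouettes (T, H, W) into one energy image."""
    return np.mean(np.asarray(silhouettes, dtype=np.float32), axis=0)


def pose_wise_energy_images(silhouettes, n_poses=4, seed=0):
    """Group frames of one gait cycle into `n_poses` clusters (a stand-in for the
    paper's key poses) and return one energy image per cluster."""
    frames = np.asarray(silhouettes, dtype=np.float32)          # (T, H, W)
    flat = frames.reshape(len(frames), -1)                      # one row per frame
    labels = KMeans(n_clusters=n_poses, n_init=10,
                    random_state=seed).fit_predict(flat)
    return [frames[labels == k].mean(axis=0) for k in range(n_poses)]


if __name__ == "__main__":
    # Toy example: 16 random "silhouettes" of size 64x44 standing in for one cycle.
    rng = np.random.default_rng(0)
    cycle = (rng.random((16, 64, 44)) > 0.5).astype(np.float32)
    gei = gait_energy_image(cycle)
    pose_geis = pose_wise_energy_images(cycle, n_poses=4)
    print(gei.shape, len(pose_geis), pose_geis[0].shape)
```

In this sketch, the per-pose energy images would play the role of the DGEI features that are subsequently passed to the GAN for co-variate removal; the GAN stage itself is omitted here.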
