Abstract

Gait has emerged as an important biometric feature that can identify individuals at a distance without requiring any interaction with the system. Various factors such as clothing, shoes, and walking surface can degrade gait recognition performance, but cross-view gait recognition is particularly challenging because the appearance of an individual's walk changes drastically with viewpoint. In this paper, we present a novel view-invariant gait representation for cross-view gait recognition based on the spatiotemporal motion characteristics of human walk. The proposed technique trains a deep fully connected neural network to transform gait descriptors from multiple viewpoints to a single canonical view: it learns one model for videos captured from all viewpoints and finds a shared high-level virtual path that projects them onto the canonical view. The network is trained only once on the spatiotemporal gait representation and is then applied to test gait sequences to construct their view-invariant descriptors, which are used for cross-view gait recognition. Experimental evaluation is carried out on two large benchmark cross-view gait datasets, CASIA-B and OU-ISIR Large Population, and the results show that the proposed algorithm outperforms current state-of-the-art methods in cross-view gait recognition.
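To make the core idea concrete, the following is a minimal sketch of such a view-transformation network. The descriptor dimension, layer sizes, and mean-squared-error regression objective are illustrative assumptions, not the paper's exact architecture: a fully connected network is trained once on paired descriptors of the same subject seen from an arbitrary view and from the canonical view.

```python
# Minimal sketch (PyTorch) of a single fully connected network that maps a
# spatiotemporal gait descriptor from any viewpoint to its canonical-view
# counterpart. Dimensions, depth, and the MSE loss are assumptions for
# illustration; the paper's exact configuration may differ.
import torch
import torch.nn as nn

DESC_DIM = 4096  # assumed flattened descriptor size (e.g., a 64x64 template)

class ViewTransformNet(nn.Module):
    """Projects a gait descriptor from an arbitrary view to the canonical view."""
    def __init__(self, dim=DESC_DIM, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, x_any_view, x_canonical):
    """One step: regress the canonical-view descriptor from an arbitrary view."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x_any_view), x_canonical)
    loss.backward()
    optimizer.step()
    return loss.item()

model = ViewTransformNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# x_any, x_canon: paired descriptors of the same subject from some view and
# from the canonical view; random tensors stand in for real data here.
x_any, x_canon = torch.randn(32, DESC_DIM), torch.randn(32, DESC_DIM)
loss = train_step(model, optimizer, x_any, x_canon)
```

At test time, under this setup, each gallery and probe descriptor would be passed through the trained network once, and recognition would reduce to matching the resulting canonical-view descriptors (e.g., by nearest neighbor), since all views now share a common representation.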
