Abstract

Gait recognition, i.e., the identification of an individual from his/her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily under normal walking conditions, their performance tends to degrade drastically with variations in clothing and carrying conditions. In this paper, we propose a novel covariate-cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2-D spatio-temporal template from a video sequence, called the average energy silhouette image (AESI). Zernike moment invariants are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using the spatial distribution of oriented gradients and a novel mean of directional pixels method. The obtained features are fused together to form the final feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e., CASIA Dataset B, OU-ISIR Treadmill Dataset B, and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance.
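The AESI described above condenses a gait sequence into one 2-D template by averaging aligned binary silhouettes over the frames. A minimal sketch of that averaging step is shown below; the function name and the simple per-pixel mean are assumptions for illustration, and the paper's exact alignment and normalization steps may differ.

```python
import numpy as np

def average_energy_silhouette_image(silhouettes):
    """Form a single 2-D spatio-temporal template by averaging
    pre-aligned binary silhouette frames of equal size.

    Illustrative sketch only: assumes silhouettes are already
    size-normalized and centered, as gait pipelines typically require.
    """
    stack = np.stack([s.astype(np.float64) for s in silhouettes], axis=0)
    # Per-pixel mean over time: bright pixels are occupied in most frames.
    return stack.mean(axis=0)

# Toy usage with three 4x4 "silhouettes".
frames = [np.eye(4), np.ones((4, 4)), np.zeros((4, 4))]
aesi = average_energy_silhouette_image(frames)
```

Pixels that stay foreground across the whole cycle (torso, head) end up with high average energy, while swinging limbs produce intermediate values, which is what makes such templates useful for downstream feature extraction.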
