Abstract

Gait recognition, i.e., identification of an individual from his or her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily under normal walking conditions, their performance tends to suffer drastically with variations in clothing and carrying conditions. In this paper, we propose a novel covariate-cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2-D spatio-temporal template from a video sequence, called the average energy silhouette image (AESI). Zernike moment invariants are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using the spatial distribution of oriented gradients and a novel mean of directional pixels method. The obtained features are fused to form the final, rich feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e., CASIA Dataset B, OU-ISIR Treadmill Dataset B, and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance.
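The AESI described above aggregates an entire walking sequence into one 2-D template. A minimal sketch of this step, assuming (as with related gait templates such as the gait energy image) that the AESI is the per-pixel average of aligned binary silhouette frames; the function name and frame shapes here are illustrative, not from the paper:

```python
import numpy as np

def average_energy_silhouette_image(silhouettes):
    """Collapse a sequence of aligned binary silhouette frames
    (each of shape H x W) into a single 2-D spatio-temporal
    template by per-pixel averaging. Illustrative sketch only."""
    stack = np.asarray(silhouettes, dtype=float)  # shape (T, H, W)
    return stack.mean(axis=0)                     # shape (H, W)

# Toy example: two 2x2 "silhouette" frames from a hypothetical sequence.
frames = [
    np.array([[1, 0], [1, 1]]),
    np.array([[1, 1], [0, 1]]),
]
aesi = average_energy_silhouette_image(frames)
# Pixels that are foreground in every frame average to 1.0;
# pixels foreground in only some frames take intermediate values.
```

Bright (high-valued) regions of such a template correspond to body parts that are static across the gait cycle, while intermediate values capture limb motion; covariate screening would then operate on regions of this template.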
