Abstract
In this paper, we present a new method to eliminate the effect of view angle for efficient gait recognition via deterministic learning theory. The width of the binarized silhouette models the periodic deformation of human gait silhouettes. It captures the spatio-temporal characteristics of each individual, represents the dynamics of gait motion, and can sensitively reflect the variance between gait patterns across different views. The gait recognition approach consists of two phases: a training phase and a recognition phase. In the training phase, the gait dynamics underlying different individuals' gaits observed from different view angles are locally accurately approximated by radial basis function (RBF) neural networks, and the obtained knowledge of the approximated gait dynamics is stored in constant RBF networks. To address view changes, whether small or large, the training patterns from different views are combined into a uniform training dataset containing the gait dynamics of each individual observed across the various views. In the recognition phase, a bank of dynamical estimators is constructed for all the training gait patterns, with the prior knowledge of human gait dynamics represented by the constant RBF networks embedded in the estimators. By comparing this set of estimators with a test gait pattern whose view is contained in the prior training dataset, a set of recognition errors is generated. The average L1 norms of these errors are taken as the similarity measure between the dynamics of the training gait patterns and the dynamics of the test gait pattern. Finally, comprehensive experiments are carried out on the CASIA-B and CMU gait databases to demonstrate the effectiveness of the proposed approach.
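As an illustrative sketch only (not the authors' implementation; the function names and array shapes below are assumptions), the following Python fragment shows how the width-of-silhouette feature and the average-L1-norm similarity measure described above might be computed:

```python
import numpy as np

def silhouette_width(binary_frames):
    """binary_frames: array of shape (T, H, W) with values in {0, 1}.
    The width feature is the count of foreground pixels in each row,
    giving one (H,)-dimensional width profile per frame."""
    return binary_frames.sum(axis=2)

def average_l1_error(estimated_dynamics, test_dynamics):
    """Average L1 norm of the per-frame recognition (estimation) errors
    over a test sequence; smaller values indicate that the stored gait
    dynamics match the test pattern more closely."""
    errors = estimated_dynamics - test_dynamics   # shape (T, H)
    return np.abs(errors).sum(axis=1).mean()

def classify(test_dynamics, estimator_outputs):
    """estimator_outputs: dict mapping each training-pattern label to the
    output of its dynamical estimator driven by the test sequence.
    Returns the label with the smallest average L1 recognition error."""
    scores = {label: average_l1_error(out, test_dynamics)
              for label, out in estimator_outputs.items()}
    return min(scores, key=scores.get)
```

In this sketch, each training pattern contributes one estimator to the bank, and the test sequence is assigned to the training pattern whose estimator yields the smallest average L1 recognition error.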