Abstract

Deformation of gait silhouettes caused by different view angles heavily degrades the performance of gait recognition. In this paper, a new method based on deterministic learning and knowledge fusion is proposed to eliminate the effect of view angle and achieve efficient view-invariant gait recognition. First, the binarized walking silhouettes are characterized by three kinds of time-varying width parameters. The nonlinear dynamics underlying the width parameters of different individuals are effectively approximated by radial basis function (RBF) neural networks through a deterministic learning algorithm. The extracted gait dynamics capture the spatio-temporal characteristics of human walking, represent the dynamics of gait motion, and are shown to be insensitive to variation across view angles. The learned knowledge of gait dynamics is stored in constant RBF networks and used as the gait pattern. Second, to handle view changes whether the variation is small or large, the learned knowledge of gait dynamics from different views is fused by constructing a deep convolutional and recurrent neural network (CRNN) model for the subsequent human identification task. This knowledge fusion strategy exploits both the local characteristics encoded by the CNN and the long-term dependencies captured by the RNN. Experimental results show that the proposed method achieves promising recognition accuracy.
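The core idea of the first stage can be illustrated with a minimal sketch: a periodic, time-varying width signal (standing in for one of the silhouette width parameters) is approximated by a Gaussian RBF network, and the fitted constant weights serve as the stored gait pattern. The synthetic signal, the RBF centers, and the least-squares fit below are all illustrative assumptions; the paper's actual method uses a Lyapunov-based deterministic learning update rather than a batch least-squares solve.

```python
import numpy as np

def rbf_features(x, centers, sigma):
    # Gaussian radial basis functions evaluated at phase values x
    d = x[:, None] - centers[None, :]
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# Synthetic gait width signal: one stride normalized to phase in [0, 1)
# (hypothetical stand-in for a measured silhouette width parameter)
phase = np.linspace(0.0, 1.0, 200, endpoint=False)
width = 0.5 + 0.3 * np.sin(2 * np.pi * phase) + 0.1 * np.sin(4 * np.pi * phase)

# RBF centers spread over the stride phase
centers = np.linspace(0.0, 1.0, 20, endpoint=False)
Phi = rbf_features(phase, centers, sigma=0.05)

# Fit constant weights: these play the role of the learned, stored
# "knowledge of gait dynamics" used later as the gait pattern.
w, *_ = np.linalg.lstsq(Phi, width, rcond=None)
reconstruction = Phi @ w
rmse = np.sqrt(np.mean((reconstruction - width) ** 2))
print(f"approximation RMSE: {rmse:.4f}")
```

Once frozen, such weight vectors (one per view and per width parameter) could form the inputs that the second stage fuses across views with a CRNN for identification.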
