Abstract

View variation is one of the greatest challenges faced by the gait recognition research community. Recently, several studies have modeled sets of gait features from multiple views as linear subspaces, which are known to form a special manifold called the Grassmann manifold. Conjecturing that linear subspace representation alone is not sufficient for gait recognition across view change, we take a step forward and consider a non-linear subspace representation. A collection of multi-view gait features encapsulated in the form of a linear subspace is projected onto a non-linear subspace through the expansion coefficients induced by kernel principal component analysis. Since subspace representations are inherently non-Euclidean, naïve vectorization as input to vector-based pattern analysis machines is expected to yield suboptimal accuracy. We deal with this difficulty by embedding the manifold in a Reproducing Kernel Hilbert Space (RKHS) through a positive definite kernel function defined on the Grassmann manifold. A closer examination reveals that the proposed approach can be interpreted as a doubly-kernel method. Specifically, the first kernel maps the linear subspace representation non-linearly to a feature space, while the second kernel permits the application of kernelization-enabled machines, established for vector-valued data, to the manifold-valued multi-view gait features. Experiments on the CASIA gait database show that the proposed doubly-kernel method is effective against view change in gait recognition.
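
The abstract does not give implementation details, but the doubly-kernel idea can be sketched along the following lines. In the sketch below, an RBF kernel (an assumed choice) plays the role of the first kernel: uncentred kernel PCA on each set of multi-view gait features yields expansion coefficients whose span defines a subspace in the RKHS. The projection kernel k(U_i, U_j) = ||U_i^T U_j||_F^2, a standard positive definite kernel on the Grassmann manifold, plays the role of the second kernel and is fed to a precomputed-kernel SVM as the downstream vector-based machine. The kernel choices, subspace dimension, and classifier here are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of a doubly-kernel pipeline, under the assumptions stated above.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def expansion_coeffs(X, m=5, gamma=1e-3):
    """First kernel (RBF, an assumed choice): uncentred kernel PCA on one set of
    multi-view gait feature vectors X (n_frames x d). Returns expansion
    coefficients A (n_frames x m) such that the columns of Phi(X)^T A form an
    orthonormal basis of an m-dimensional subspace in the RKHS."""
    K = rbf_kernel(X, X, gamma=gamma)
    w, V = np.linalg.eigh(K)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:m]            # keep the top-m components
    return V[:, idx] / np.sqrt(w[idx])       # alpha_k = v_k / sqrt(lambda_k)

def grassmann_projection_kernel(sets, coeffs, gamma=1e-3):
    """Second kernel: projection kernel k(i, j) = ||U_i^T U_j||_F^2 between the
    RKHS subspaces, computed entirely through cross kernel matrices. The same
    base kernel (same gamma) must be used as in expansion_coeffs."""
    n = len(sets)
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            Kij = rbf_kernel(sets[i], sets[j], gamma=gamma)
            M = coeffs[i].T @ Kij @ coeffs[j]   # U_i^T U_j in coefficient form
            G[i, j] = G[j, i] = np.sum(M ** 2)  # squared Frobenius norm
    return G

# Toy usage with random stand-ins for per-sample multi-view gait feature sets.
rng = np.random.default_rng(0)
gait_sets = [rng.normal(size=(30, 64)) for _ in range(20)]  # 20 samples, 30 frames, 64-dim features
labels = np.repeat(np.arange(10), 2)                        # 10 subjects, 2 samples each
A = [expansion_coeffs(X) for X in gait_sets]
G = grassmann_projection_kernel(gait_sets, A)
clf = SVC(kernel="precomputed").fit(G, labels)              # kernel machine on the Grassmann Gram matrix
print(clf.predict(G[:1]))                                   # classify the first sample
```

Because the projection kernel equals the Frobenius inner product of the projection matrices U_i U_i^T and U_j U_j^T, the resulting Gram matrix is positive semi-definite and can be consumed directly by any kernel machine designed for vector-valued data.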
