Abstract

View variation is one of the greatest challenges in gait recognition. Subspace learning approaches address this issue by projecting cross-view features into a common subspace before recognition. However, their similarity measures are data-dependent, which leads to low accuracy when cross-view gait samples are arranged randomly. Inspired by recent developments in data-driven similarity learning and multi-nonlinear projection, we propose a new unsupervised projection approach called multi-nonlinear multi-view locality-preserving projections with similarity learning (M2LPP-SL). In M2LPP-SL, the similarity information among cross-view samples is learned adaptively, and the complex nonlinear structure of the original data is preserved through multiple explicit nonlinear projection functions. Nevertheless, its performance depends heavily on the choice of nonlinear projection functions. Considering the strong ability of the kernel trick to capture nonlinear structure, we further extend M2LPP-SL into kernel space and propose its multiple-kernel version, MKMLPP-SL. As a result, our approaches capture linear and nonlinear structure more precisely while also learning the similarity information hidden in multi-view gait data. The proposed models can be solved efficiently by an alternating direction optimization method. Extensive experiments over various view combinations on the multi-view gait database CASIA-B demonstrate the superiority of the proposed algorithms.
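To make the alternating scheme concrete, the sketch below illustrates the general idea of locality-preserving projections with adaptive cross-view similarity learning for two views. It is not the authors' exact M2LPP-SL formulation: the similarity update here uses a Gaussian-kernel heuristic as a stand-in for the paper's adaptive update, the projections are linear rather than multi-nonlinear, and all function names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh


def update_similarity(Z1, Z2, sigma=1.0):
    """Re-estimate the cross-view similarity matrix from projected features.

    Z1, Z2: (k, n) projected samples of view 1 and view 2 (columns are samples).
    Returns a row-stochastic similarity matrix S where S[i, j] reflects how close
    the i-th view-1 sample is to the j-th view-2 sample in the learned subspace.
    (A Gaussian heuristic stands in for the paper's adaptive similarity update.)
    """
    d = ((Z1[:, :, None] - Z2[:, None, :]) ** 2).sum(axis=0)  # pairwise squared distances
    S = np.exp(-d / (2 * sigma ** 2))
    return S / S.sum(axis=1, keepdims=True)


def update_projections(X1, X2, S, k, reg=1e-6):
    """Fix S and solve for linear projections P1, P2 that keep similar
    cross-view pairs close after projection (an LPP-style step).

    Stacks both views block-diagonally and solves one generalized eigenproblem
        Z L Z^T p = lambda * Z D Z^T p
    on the bipartite graph defined by S, keeping the k smallest eigenvectors.
    """
    n = X1.shape[1]
    d1, d2 = X1.shape[0], X2.shape[0]
    Z = np.block([[X1, np.zeros((d1, n))],
                  [np.zeros((d2, n)), X2]])            # (d1+d2, 2n) block-diagonal data
    W = np.block([[np.zeros((n, n)), S],
                  [S.T, np.zeros((n, n))]])            # bipartite cross-view graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                          # graph Laplacian
    A = Z @ L @ Z.T
    B = Z @ D @ Z.T + reg * np.eye(d1 + d2)            # regularized for numerical stability
    _, vecs = eigh(A, B)                               # eigenvalues in ascending order
    P = vecs[:, :k]                                    # k smallest generalized eigenvectors
    return P[:d1], P[d1:]


def m2lpp_sl_sketch(X1, X2, k=10, iters=5):
    """Alternate between projection learning and similarity learning."""
    n = X1.shape[1]
    S = np.full((n, n), 1.0 / n)                       # start from a uniform similarity
    for _ in range(iters):
        P1, P2 = update_projections(X1, X2, S, k)
        S = update_similarity(P1.T @ X1, P2.T @ X2)
    return P1, P2, S
```

In this sketch, each pass first fixes the similarity matrix and solves a generalized eigenproblem for the projections, then fixes the projections and refreshes the similarities from the projected features, mirroring the alternating optimization strategy described in the abstract.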
