Abstract

The View Transformation Model (VTM) is a widely used method for the multi-view problem in gait recognition. However, accuracy loss occurs during the view transformation procedure, and it grows as the difference between the viewing angles of two gait features increases. To address this difficulty, 2D Enhanced GEI (2D-EGEI) is proposed to extract effective gait features via the reconstruction of 2DPCA. In addition, Nonnegative Matrix Factorization (NMF) is adopted to learn locally structured features that compensate for the accuracy loss. Moreover, 2D Linear Discriminant Analysis (2DLDA) is introduced to project the features into a discriminant space and improve classification ability. Experimental comparisons with two deep learning methods show that the proposed method significantly outperforms the Stacked Progressive Auto-Encoder (SPAE) and approaches the performance of a deep CNN network.
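The two feature-learning steps named in the abstract can be illustrated with a minimal sketch. The code below uses synthetic stand-ins for gait energy images (GEIs), a column-covariance variant of 2DPCA for reconstruction, and scikit-learn's NMF for part-based features; the image size and component counts are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic stand-ins for gait energy images (GEIs): 20 samples of 32x24 pixels.
geis = rng.random((20, 32, 24))

# --- 2DPCA reconstruction (the basis of the 2D-EGEI feature) ---
# Build the image column-covariance matrix and take its top eigenvectors
# as projection axes; projecting and back-projecting reconstructs the image.
mean_img = geis.mean(axis=0)
G = sum((x - mean_img).T @ (x - mean_img) for x in geis) / len(geis)  # 24x24
eigvals, eigvecs = np.linalg.eigh(G)        # eigenvalues in ascending order
U = eigvecs[:, -5:]                          # top-5 axes (hypothetical choice)
recon = np.stack([(x @ U) @ U.T for x in geis])  # reconstructed images

# --- NMF for local, part-based structure ---
# Flatten each image to a nonnegative pixel vector and factorize.
X = geis.reshape(len(geis), -1)
nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
H = nmf.fit_transform(X)     # per-sample activations (20 x 8)
parts = nmf.components_      # 8 local basis images, each nonnegative
```

The reconstructed images `recon` play the role of the enhanced GEI input, while the NMF factors supply the local structure; a 2DLDA projection would then be fit on labeled features before classification.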
