Abstract

Gait recognition has attracted widespread attention in recent years, as human gait retains strong discriminative ability even in low-quality image sequences captured at a distance. This paper describes a gait recognition algorithm that combines shallow and deep features across multiple walking views. For each walking sequence, the binary silhouettes are characterized by width features as shallow parameters, including arm silhouette widths and leg silhouette widths, which implicitly reflect the temporal changes of silhouette shape. In addition, using a deep transfer learning algorithm, deep gait features are represented as the spatial variation of pixel points in the human silhouette, capturing the spatial changes among silhouette images. Both shallow and deep features can be used separately for gait recognition with a Support Vector Machine (SVM) classifier; here they are fused at the decision level to improve recognition performance. The fusion of the two kinds of features provides a comprehensive characterization of gait dynamics that is not sensitive to variations in walking view. The proposed method is evaluated on the CASIA-B dataset. Experimental results show that it outperforms related methods in terms of identification and classification accuracy.
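The sketch below illustrates the general pipeline the abstract describes: per-frame silhouette width features as shallow parameters, and decision-level fusion of two SVM classifiers. It is a minimal, hedged example rather than the authors' implementation: the row ranges used to split arm and leg regions, the summary statistics, the RBF kernel, and the equal fusion weight are all assumptions, and the deep (transfer-learned) feature extraction is not shown, only its fused use.

```python
import numpy as np
from sklearn.svm import SVC


def silhouette_widths(silhouette, row_range):
    """Per-row width (rightmost minus leftmost foreground pixel) over a body region."""
    widths = []
    for r in range(*row_range):
        cols = np.flatnonzero(silhouette[r])
        widths.append(cols[-1] - cols[0] + 1 if cols.size else 0)
    return np.asarray(widths, dtype=float)


def shallow_gait_features(sequence, arm_rows=(60, 120), leg_rows=(120, 200)):
    """Summarize arm and leg silhouette widths across a sequence of binary frames.

    The row ranges separating the arm and leg regions are illustrative
    assumptions, not values taken from the paper.
    """
    arm = np.stack([silhouette_widths(f, arm_rows) for f in sequence])
    leg = np.stack([silhouette_widths(f, leg_rows) for f in sequence])
    # Mean and standard deviation over time roughly capture the temporal
    # variation of silhouette shape that the width features are meant to encode.
    return np.concatenate([arm.mean(0), arm.std(0), leg.mean(0), leg.std(0)])


def fuse_predict(X_shallow, X_deep, y, Xs_test, Xd_test, w=0.5):
    """Decision-level fusion of two SVMs trained on shallow and deep features.

    X_shallow/X_deep are training feature matrices for the same subjects (labels y);
    Xs_test/Xd_test are the corresponding test features. The equal weight w=0.5
    is an assumption for illustration.
    """
    clf_shallow = SVC(kernel="rbf", probability=True).fit(X_shallow, y)
    clf_deep = SVC(kernel="rbf", probability=True).fit(X_deep, y)
    # Weighted sum of class posteriors from the two classifiers, then argmax.
    scores = (w * clf_shallow.predict_proba(Xs_test)
              + (1 - w) * clf_deep.predict_proba(Xd_test))
    return clf_shallow.classes_[scores.argmax(axis=1)]
```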
