Abstract
This paper proposes a full-body layered deformable model (LDM), inspired by manually labeled silhouettes, for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). The LDM consists of four layers, and its limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where automatic silhouette extraction follows a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing dynamic time warping for matching and adopting the combination scheme of AdaBoost.M2. While existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders, and the head as well. In the experiments, LDM-based gait recognition is tested on gait sequences that differ in shoe type, surface, carrying condition, and time. The results demonstrate that recognition performance benefits not only from the lower-limb dynamics but also from the dynamics of the upper limbs, the shoulders, and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting gait under various conditions.
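To make the matching step concrete, the following is a minimal sketch of dynamic time warping (DTW) over sequences of per-frame LDM parameter vectors. The 22-dimensional feature vectors, the Euclidean local cost, and the length normalization are illustrative assumptions, not details taken from the paper.

```python
# Minimal DTW sketch for comparing two gait-parameter sequences,
# assuming one 22-dimensional LDM parameter vector per frame and a
# Euclidean local cost (both are assumptions for illustration).
import numpy as np

def dtw_distance(probe: np.ndarray, gallery: np.ndarray) -> float:
    """DTW distance between two per-frame parameter sequences of
    shapes (T1, D) and (T2, D)."""
    t1, t2 = len(probe), len(gallery)
    # Pairwise Euclidean cost between every probe and gallery frame.
    cost = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=2)
    acc = np.full((t1 + 1, t2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # insertion
                acc[i, j - 1],      # deletion
                acc[i - 1, j - 1],  # match
            )
    return acc[t1, t2] / (t1 + t2)  # length-normalized alignment cost

# Example: two walking sequences of different lengths, D = 22 parameters.
rng = np.random.default_rng(0)
seq_a = rng.standard_normal((60, 22))
seq_b = rng.standard_normal((75, 22))
print(dtw_distance(seq_a, seq_b))
```

Because DTW aligns the two sequences before accumulating the cost, probe and gallery gait cycles of different lengths or walking speeds can still be compared directly.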
Highlights
Automatic person identification is an important task in visual surveillance and in monitoring applications for security-sensitive environments such as airports, banks, malls, parking lots, and large civic structures; biometrics such as iris, face, and fingerprint have been researched extensively for this purpose.
As pointed out in [22], since the shape parameters are largely affected by clothing and by the silhouette extraction algorithm used, they are not treated as gait dynamics for practical automatic model-based gait recognition, as the experiments demonstrate (Section 5).
The experiments on layered deformable model (LDM)-based gait recognition were carried out on the manual silhouettes created in [16] and on the corresponding subset of the original “gait challenge” data sets, which contains human gait sequences captured under various outdoor conditions.
Summary
This paper proposes a full-body layered deformable model (LDM), inspired by manually labeled silhouettes, for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). The estimated LDM parameters are used for model-based gait recognition by employing dynamic time warping for matching and adopting the combination scheme of AdaBoost.M2. While existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders, and the head as well. LDM-based gait recognition is tested on gait sequences that differ in shoe type, surface, carrying condition, and time. The LDM can serve as an analysis tool for studying factors affecting gait under various conditions.
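The paper adopts the combination scheme of AdaBoost.M2 to fuse the part-level matchers. The sketch below illustrates that style of fusion, in which each body part's matcher acts as a weak hypothesis weighted by log(1/beta); the part list, the score normalization, and the beta values are assumptions for illustration, not values from the paper.

```python
# Sketch of an AdaBoost.M2-style combination rule: each body part
# contributes a hypothesis h(x, y) in [0, 1] per gallery subject y,
# and the final decision is H(x) = argmax_y sum_t log(1/beta_t) h_t(x, y).
# Part names and beta weights below are illustrative assumptions.
import numpy as np

def combine_parts(part_scores: dict[str, np.ndarray],
                  betas: dict[str, float]) -> int:
    """part_scores[p] holds one similarity score per gallery subject for
    body part p; returns the index of the predicted subject."""
    total = None
    for part, scores in part_scores.items():
        # Normalize scores into [0, 1] so each part is a valid hypothesis.
        h = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
        w = np.log(1.0 / betas[part])  # AdaBoost.M2-style weight
        total = w * h if total is None else total + w * h
    return int(np.argmax(total))

# Example: 5 gallery subjects, per-part similarity scores (e.g., negated
# DTW distances), with made-up boosting weights beta < 1.
rng = np.random.default_rng(1)
scores = {p: rng.random(5) for p in
          ["lower_limbs", "upper_limbs", "shoulders", "head"]}
betas = {"lower_limbs": 0.2, "upper_limbs": 0.35,
         "shoulders": 0.5, "head": 0.6}
print("predicted subject:", combine_parts(scores, betas))
```

Smaller beta values (stronger matchers) receive larger weights, so a reliable part such as the lower limbs dominates the decision while weaker parts still contribute.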