Abstract
Person re-identification is an important problem for associating the behavior of people monitored in surveillance camera networks. Its fundamental challenges are the large appearance distortions caused by changes in view angle, illumination, and occlusion. To address these challenges, this paper proposes a method that enhances person re-identification by integrating gait biometrics. The proposed framework consists of hierarchical feature extraction and descriptor matching with learned metric matrices. Because appearance features alone are not discriminative in some cases, the descriptor in this work combines appearance features with a gait feature that captures shape and temporal information. To handle view-angle changes and measure similarity, data are mapped into a learned metric space so that distances between people can be measured more accurately. Two fusion strategies are then adopted. Score-level fusion computes distances on the appearance feature and the gait feature separately and combines them into the final distance between samples. Feature-level fusion first concatenates the two types of features and then computes distances on the fused descriptor. Finally, the method is evaluated on the CASIA gait dataset. Experiments show that integrating gait biometrics is an effective way to enhance person re-identification.
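The two fusion strategies described above can be illustrated with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the metric matrices `M_app`, `M_gait`, and `M_fused`, the score weight `w`, and the plain NumPy descriptors are hypothetical placeholders standing in for whatever metric learning method and features the paper actually uses.

```python
import numpy as np

def metric_distance(x, y, M):
    """Distance between descriptors x and y under a learned metric matrix M
    (Mahalanobis-style: sqrt((x - y)^T M (x - y)))."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def score_level_fusion(app_a, app_b, gait_a, gait_b, M_app, M_gait, w=0.5):
    """Score-level fusion: compute appearance and gait distances separately,
    then combine the two scores into one final distance."""
    d_app = metric_distance(app_a, app_b, M_app)
    d_gait = metric_distance(gait_a, gait_b, M_gait)
    return w * d_app + (1.0 - w) * d_gait   # weight w is an assumed free parameter

def feature_level_fusion(app_a, app_b, gait_a, gait_b, M_fused):
    """Feature-level fusion: concatenate appearance and gait features first,
    then measure a single distance on the fused descriptor."""
    fused_a = np.concatenate([app_a, gait_a])
    fused_b = np.concatenate([gait_b, app_b][::-1])  # same order as fused_a: appearance then gait
    return metric_distance(fused_a, fused_b, M_fused)

# Toy usage with random descriptors and identity metrics (placeholders only).
rng = np.random.default_rng(0)
app_a, app_b = rng.normal(size=64), rng.normal(size=64)
gait_a, gait_b = rng.normal(size=32), rng.normal(size=32)
print(score_level_fusion(app_a, app_b, gait_a, gait_b, np.eye(64), np.eye(32)))
print(feature_level_fusion(app_a, app_b, gait_a, gait_b, np.eye(96)))
```

With identity metric matrices both functions reduce to Euclidean distances; in the paper's setting the matrices would instead be learned so that distances between different views of the same person shrink relative to distances between different people.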