Abstract

Person re-identification problems usually suffer from large subject appearance variations and limited training data. This paper proposes a novel, physically motivated Color/Illuminance-Aware Data Augmentation (CIADA) scheme and a style-adaptive fusion approach to address these issues. The CIADA scheme estimates the color/illuminance distribution of the training data via manifold learning and generates new samples under different color/illuminance perturbations, better capturing each subject's appearance and mitigating the small-sample-size and color-variation problems. A Color/Illuminance-Aware Feature Augmentation (CIAFA) approach, applicable to state-of-the-art features and metric learning algorithms, is then proposed to integrate the features generated from the augmented samples for metric learning. A new Color/Illuminance-Aware Style Fusion (CIASF) scheme is also proposed, in which learning and matching are performed independently on each pair of generated datasets to estimate a set of 'local' distance functions. A canonical correlation analysis-based weighting scheme is developed to fuse these local distances into an overall distance for recognition, which reduces the memory requirement and complexity relative to the original CIAFA. Experiments on common datasets show that the proposed methodologies substantially improve the performance of state-of-the-art subspace learning algorithms and are applicable to both small and large datasets with hand-crafted and deep features.
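To make the data-augmentation idea concrete, the following minimal Python sketch generates color/illuminance-perturbed copies of a training image. It is an illustration only: the function name, the fixed gain and offset ranges, and the simple per-channel scaling model are assumptions for exposition, whereas the actual CIADA scheme estimates the color/illuminance perturbation distribution from the training data via manifold learning.

import numpy as np

def color_illuminance_augment(image, n_copies=4, gain_range=(0.8, 1.2),
                              offset_range=(-20.0, 20.0), rng=None):
    """Return n_copies perturbed versions of an HxWx3 uint8 image.

    Illustrative only: gains/offsets are drawn from fixed ranges rather than
    from a distribution estimated via manifold learning as in CIADA.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(np.float32)
    augmented = []
    for _ in range(n_copies):
        gains = rng.uniform(*gain_range, size=3)   # per-channel color gain
        offset = rng.uniform(*offset_range)        # global illuminance shift
        perturbed = np.clip(img * gains + offset, 0, 255).astype(np.uint8)
        augmented.append(perturbed)
    return augmented

# Example: two perturbed copies of a random 128x64 RGB "image"
dummy = (np.random.rand(128, 64, 3) * 255).astype(np.uint8)
copies = color_illuminance_augment(dummy, n_copies=2)

Features extracted from such perturbed copies would then be combined with those of the original samples for metric learning, which is the role played by CIAFA in the paper.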

Highlights

  • Pedestrian recognition across multiple cameras, or person re-identification (Person Re-id), has been extensively studied in the past decade due to the rapid deployment of large-scale video-based surveillance networks for social security and other applications.

  • The proposed Color/Illuminance-Aware Data Augmentation (CIADA) approach effectively improves the recognition accuracy of state-of-the-art metric learning algorithms.

  • On VIPeR, the Color/Illuminance-Aware Style Fusion (CIASF) boosts the performance of Color/Illuminance-Aware Feature Augmentation (CIAFA) by a further 3-5% with both metric learning methods.


Summary

Introduction

Pedestrian recognition across multiple cameras, or person re-identification (Person Re-id), has been extensively studied in the past decade due to the rapid deployment of large-scale video-based surveillance networks for social security and other applications. Despite the growing body of published literature, it remains challenging due to large variations across camera views and the limited availability of subject data. The large variation caused by view-specific illumination/color (VSIC) limits the gains achievable from improved feature extraction techniques and recognizers. Moreover, the sample size of commonly used datasets is usually small compared with the feature dimension, which restricts the use of more complex models. Since each subject must be distinguished from all the others, person re-identification is a recognition problem with many classes but few samples per class [1].

