Abstract

Currently, the majority of person re-identification (reID) networks are regularized with Dropout, which randomly zeroes out features to make them more independent. However, such random-zeroing regularization is of limited benefit to network performance because it neglects the distinct contribution of individual features. To exploit the value of indiscriminative features in network training, a DropEasy-based person reID method is proposed in this paper. Features are classified as discriminative or indiscriminative according to the distance between the feature vectors of positive or negative sample pairs; the discriminative features are zeroed out while the indiscriminative features are retained, so that the network learns only through the indiscriminative features. Furthermore, because a network tends to compensate for missing information by drawing on the surrounding features in a feature map, Dropout loses its effectiveness as a constraint on convolutional layers. To address this challenge, the DropEasy2d method, which can be effectively applied to convolutional layers, is further proposed. DropEasy2d searches for discriminative regions in the feature maps with a sliding window and zeroes them out, while retaining the indiscriminative regions to constrain network learning. The effectiveness of the proposed methods is demonstrated on the Market-1501, DukeMTMC-reID, and CUHK03 datasets. For example, on Market-1501, DropEasy improves the mean average precision (mAP) and Rank-1 accuracy of the ID-discriminative embedding (IDE) to 72.7% (+8.8%) and 90.5% (+6.8%), respectively, while DropEasy2d raises them to 68.5% (+4.6%) and 88.7% (+5.0%), respectively. These results show that the proposed methods improve network performance in extracting and generalizing discriminative features.
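To make the feature-level idea concrete, the following is a minimal sketch of a DropEasy-style module, an illustration only and not the authors' implementation: it treats the dimensions with the largest per-dimension gap between the anchor and positive feature vectors as discriminative, zeroes them out, and lets the network learn from the remaining indiscriminative dimensions. The class name `DropEasySketch`, the `drop_ratio` parameter, and the use of the absolute per-dimension gap as the discriminativeness score are all assumptions.

```python
import torch
import torch.nn as nn


class DropEasySketch(nn.Module):
    """Illustrative sketch of DropEasy-style regularization.

    ASSUMPTION: the absolute per-dimension gap between the anchor and
    positive feature vectors stands in for the paper's pair-distance
    criterion for discriminativeness.
    """

    def __init__(self, drop_ratio: float = 0.3):
        super().__init__()
        self.drop_ratio = drop_ratio  # fraction of dimensions to zero out

    def forward(self, anchor: torch.Tensor, positive: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return anchor  # behave as identity at inference time
        # Per-dimension contribution to the pair distance, shape (B, D).
        gap = (anchor - positive).abs()
        k = int(self.drop_ratio * anchor.size(1))
        # Indices of the k most discriminative dimensions per sample.
        _, drop_idx = gap.topk(k, dim=1)
        mask = torch.ones_like(anchor)
        mask.scatter_(1, drop_idx, 0.0)  # zero out discriminative dims
        return anchor * mask  # network learns from the remaining dims
```

In a training loop such a module would be applied to the embeddings of each positive pair before the loss, e.g. `feats = DropEasySketch(0.3)(anchor_feats, positive_feats)`; at inference time it is an identity mapping.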

Highlights

  • Person re-identification is a crucial branch of computer vision that is typically used for the person-matching task in a cross-camera scenario

  • In classic person reID networks such as SVDNet, DropEasy was found to increase the mean average precision (mAP) and Rank-1 accuracy by 10.3% and 12.4%, respectively, making it more effective than random erasing

  • Regularization methods based on random zeroing cannot effectively improve network performance with regard to indiscriminative features


Summary

INTRODUCTION

Person re-identification (reID) is a crucial branch of computer vision that is typically used for the person-matching task in a cross-camera scenario. Fan et al. [26], using the softmax function, obtained the inner product by multiplying the norms of the vectors by the cosine value; a margin was added to the cosine value to effectively expand the inter-class distance. Both feature-learning-based and metric-learning-based methods use regularization to generalize networks, and it is important to develop a method for converting indiscriminative features into discriminative ones. To address this challenge, person reID based on the DropEasy method is proposed in this paper. The proposed method improves the network's effectiveness in extracting and generalizing discriminative features. By sliding a window to find rectangular areas with discriminative features on the feature map, zeroing them out, and retaining the areas with indiscriminative features, DropEasy2d can effectively improve network learning and generalization. In comparison with other regularization methods based on random zeroing, it helps networks extract more discriminative features, as the sketch below illustrates.
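As an illustration of the sliding-window mechanism described above, here is a minimal DropEasy2d-style sketch, again an assumption-laden illustration rather than the paper's code: it scores every window position by mean activation magnitude (a stand-in for the paper's discriminativeness criterion), zeroes out the highest-scoring window, and keeps the rest of the feature map. The class name, the `window` parameter, and the saliency measure are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DropEasy2dSketch(nn.Module):
    """Sketch of DropEasy2d-style regularization on a (B, C, H, W) map.

    ASSUMPTION: scoring windows by mean activation magnitude stands in
    for the paper's criterion for discriminative regions.
    """

    def __init__(self, window: int = 3):
        super().__init__()
        self.window = window  # side length of the square sliding window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # identity at inference time
        b = x.size(0)
        # Channel-averaged saliency map, shape (B, 1, H, W).
        saliency = x.abs().mean(dim=1, keepdim=True)
        # Mean saliency of every window position (stride-1 sliding window).
        scores = F.avg_pool2d(saliency, self.window, stride=1)
        flat = scores.flatten(1).argmax(dim=1)  # best window per sample
        w_out = scores.size(-1)                 # number of column positions
        mask = torch.ones_like(x)
        for i in range(b):
            top, left = int(flat[i]) // w_out, int(flat[i]) % w_out
            # Zero out the most discriminative window; keep the rest.
            mask[i, :, top:top + self.window, left:left + self.window] = 0.0
        return x * mask
```

Such a module would sit between convolutional blocks during training, e.g. `x = DropEasy2dSketch(window=3)(x)`, forcing the network to learn from the retained, less discriminative regions of the map.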

