Abstract

In recent years, the field of person re-identification has made significant advances on the wave of deep learning. However, because datasets contain far more easy examples than meaningful hard ones, training tends to stagnate quickly and the model may over-fit, leading to mismatches at test time, especially on hard samples. Hard sample mining is therefore crucial for optimizing the model and improving learning efficiency. In this paper, an Adaptive Hard Sample Mining algorithm is proposed for training a robust person re-identification model. Without hand-picking the images in each batch or designing a loss function over positive and negative pairs, we can simply compute the hard level of a sample by comparing its prediction with its true label. Meanwhile, to account for the changing number of samples the model requires during training, an adaptive hard-level threshold keeps the algorithm in step with the training process while alleviating both under-fitting and over-fitting. Moreover, the network designed to implement the approach is efficient and generalizes well, so it can readily be combined with various existing models. Experimental results on the Market-1501, DukeMTMC-reID and CUHK03 datasets clearly demonstrate the effectiveness of the proposed algorithm.
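The selection scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the hard level of a sample is one minus the predicted probability of its true class, and that the adaptive threshold rises linearly over training so that early epochs keep most samples (avoiding under-fitting) while later epochs keep only the hardest ones (avoiding over-fitting). All function names and the schedule endpoints are illustrative assumptions.

```python
import numpy as np

def hard_levels(probs, labels):
    # Hard level per sample: 1 minus the predicted probability of the
    # true class. A confidently correct prediction gives a low hard level.
    return 1.0 - probs[np.arange(len(labels)), labels]

def adaptive_threshold(epoch, total_epochs, start=0.1, end=0.6):
    # Illustrative schedule: the threshold rises linearly from `start`
    # to `end`, so fewer (and harder) samples are kept as training goes on.
    return start + (end - start) * epoch / max(total_epochs - 1, 1)

def select_hard(probs, labels, epoch, total_epochs):
    # Keep indices of samples whose hard level meets the current threshold.
    h = hard_levels(probs, labels)
    thr = adaptive_threshold(epoch, total_epochs)
    return np.where(h >= thr)[0]
```

For example, with softmax outputs for three samples, `select_hard` would return all three indices at epoch 0 (low threshold) but only the hardest sample at the final epoch (high threshold).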
