Abstract

A model trained on one domain typically suffers a dramatic performance drop when applied to another, owing to the domain gap between them. Existing solutions concentrate on reducing the data distribution discrepancy across domains, but they ignore the unlabeled samples in the target domain. To address this problem, we propose the cross-view similarity exploration (CVSE) method, which combines style-transferred samples to optimize both the CNN model and the relationships between samples. It consists of two stages. In stage-I, we use StarGAN to train a style transfer model that generates images in multiple camera styles, increasing the quantity and diversity of the samples. In stage-II, we propose incremental optimization learning, which iterates between similarity grouping and CNN model optimization to progressively explore the potential similarities among all training samples. Furthermore, to reduce the impact of label noise on performance, we propose a new ranking-guided triplet loss, which selects reliable triplet samples on the basis of similarity alone and requires no labels. Extensive experiments on the Market-1501 and DukeMTMC-reID datasets demonstrate that the proposed CVSE is competitive with state-of-the-art methods.
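To make stage-II concrete, the sketch below shows one way the alternation between similarity grouping and CNN optimization could be realized. It is a minimal, self-contained outline under stated assumptions, not the paper's exact procedure: DBSCAN over cosine distance stands in for the similarity grouping rule, a linear embedder stands in for the CNN, and the centre-pulling loss, `eps`, and round count are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import DBSCAN

def similarity_grouping(features, eps=0.5):
    # Group mutually similar samples; -1 marks samples left ungrouped in
    # this round (DBSCAN "noise"). The grouping rule here is an assumption.
    return DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(features)

def incremental_optimization(model, data, rounds=5, eps=0.5, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(rounds):
        # Step 1: extract features and group samples by similarity.
        model.eval()
        with torch.no_grad():
            feats = F.normalize(model(data), dim=1)
        pseudo = torch.as_tensor(similarity_grouping(feats.numpy(), eps=eps))
        keep = pseudo >= 0                      # train only on grouped samples
        if not keep.any():
            continue                            # nothing grouped this round
        labels = pseudo[keep]
        # Step 2: optimize the model on the pseudo-labelled subset by
        # pulling each sample towards its group centre (a stand-in loss).
        model.train()
        emb = F.normalize(model(data[keep]), dim=1)
        centres = torch.stack([emb[labels == c].mean(0) for c in labels.unique()])
        loss = (emb - centres[labels]).pow(2).sum(1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Toy usage: random vectors stand in for images, a linear map for the CNN.
model = nn.Sequential(nn.Linear(128, 64))
data = torch.randn(200, 128)
incremental_optimization(model, data)
```

Each round can regroup more samples as the features improve, which is the sense in which the optimization is incremental.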
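The ranking-guided triplet loss can likewise be sketched from the description alone. Assuming L2-normalised features and a per-anchor similarity ranking, a sample at a small rank is treated as a reliable positive and a sample at a much larger rank as a reliable negative; the specific ranks and margin below are illustrative assumptions, not the paper's values.

```python
import torch
import torch.nn.functional as F

def ranking_guided_triplet_loss(features, pos_rank=1, neg_rank=50, margin=0.3):
    # Rank all samples against each anchor by cosine similarity and pick
    # triplets from the ranking alone; no identity labels are used.
    f = F.normalize(features, dim=1)             # (N, D) unit-length features
    sim = f @ f.t()                               # pairwise cosine similarity
    sim.fill_diagonal_(-2.0)                      # exclude each anchor itself
    order = sim.argsort(dim=1, descending=True)   # neighbour indices per anchor
    pos = f[order[:, pos_rank - 1]]               # near neighbour -> positive
    neg = f[order[:, neg_rank - 1]]               # far neighbour  -> negative
    d_pos = (f - pos).pow(2).sum(1)               # squared distance to positive
    d_neg = (f - neg).pow(2).sum(1)               # squared distance to negative
    return F.relu(d_pos - d_neg + margin).mean()  # standard triplet hinge

# Requires at least neg_rank + 1 samples in the batch.
loss = ranking_guided_triplet_loss(torch.randn(128, 64))
```

Because both the positive and the negative are chosen from the similarity ranking, a mislabelled or noisy sample has less influence than it would under label-based triplet mining.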
