Abstract

Cross-domain person re-identification, i.e., identifying the same individual across different cameras or environments, must overcome the challenges posed by scene variations; this is a primary difficulty in person re-identification and a bottleneck for its practical application. In this paper, we learn an invariance model based on cross-domain feature fusion, using a labeled source domain and an unlabeled target domain. First, our method learns the global and local fusion features of a person in the source domain through supervised learning with only identity labels and no part-level annotations, and obtains the fused person features in the source and target domains through unsupervised learning. Building on these fused features, we introduce a feature memory that stores the fused target-domain features and design a cross-domain invariance loss function to improve cross-domain adaptability. Finally, we conduct cross-domain person re-identification experiments between the Market-1501 and DukeMTMC-reID datasets; the experimental results show that the proposed method achieves a significant performance improvement in cross-domain person re-identification.
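The feature-memory mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the momentum-based memory update, and the softmax temperature are all assumptions chosen for clarity; the memory stores one L2-normalized fused feature per target-domain exemplar, and the invariance loss encourages each target feature to match its own memory slot.

```python
import numpy as np

def update_memory(memory, idx, feat, momentum=0.5):
    """Moving-average update of one slot of the target feature memory.

    `memory` is a (num_exemplars, dim) array of L2-normalized fused
    target-domain features; `feat` is the current fused feature of
    exemplar `idx`. The momentum value is an illustrative assumption.
    """
    memory[idx] = momentum * memory[idx] + (1.0 - momentum) * feat
    memory[idx] /= np.linalg.norm(memory[idx])  # keep the slot unit-length
    return memory

def invariance_loss(memory, feat, idx, temperature=0.05):
    """Cross-domain invariance loss for one target feature (a sketch).

    Computes a softmax over similarities between `feat` and every stored
    target feature, and returns the negative log-probability of the
    feature's own slot (exemplar invariance).
    """
    logits = memory @ feat / temperature        # cosine similarities / T
    probs = np.exp(logits - logits.max())       # numerically stable softmax
    probs /= probs.sum()
    return -np.log(probs[idx])

# Hypothetical usage with random unit features standing in for fused features.
rng = np.random.default_rng(0)
memory = rng.normal(size=(10, 8))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)
feat = rng.normal(size=8)
feat /= np.linalg.norm(feat)

memory = update_memory(memory, 3, feat)
loss = invariance_loss(memory, feat, 3)
```

Minimizing this loss pulls each target feature toward its own memory slot and away from the others, which is one common way such a memory-based invariance objective is realized; the paper's actual loss may differ in its weighting and neighborhood terms.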
