Abstract

Unsupervised domain adaptation (UDA) techniques for person re-identification (ReID) have been extensively studied to facilitate the transfer of knowledge from labeled source domains to unlabeled target domains. However, the need to access the source data raises privacy concerns in real-world scenarios. To overcome this limitation, source-free domain adaptation (SFDA) was introduced, enabling adaptation without access to the source data by relying instead on a well-trained source model. Nevertheless, existing SFDA methods assume a shared label space and overlook the significance of domain-style discrepancies in person ReID, limiting their applicability to source-free domain adaptive person ReID. In this paper, we present a novel approach called Source-free Style-diversity Adversarial Domain Adaptation with Privacy-preservation (S2ADAP) for person ReID to address these challenges. Our approach handles inter-domain differences in pedestrian appearance style through GAN-based domain-style diversity augmentation, and intra-domain individual style misalignment through adversarial mutual-teaching learning, all without accessing any source-domain data. We leverage a pre-trained model as a person appearance style encoder to enhance source-similar style diversity in the target domain, and achieve intra-domain individual style alignment by introducing a domain-style discriminator that promotes the discriminability of person semantic features for domain adaptation. Experimental results on publicly available person ReID datasets affirm the efficacy of our approach, offering a promising and privacy-preserving solution for person ReID tasks.
