Decoupling Domain Invariance and Variance With Tailored Prompts for Open-Set Domain Adaptation
Open-set domain adaptation (OSDA) aims to transfer a model from a labeled source domain to an unlabeled target domain that contains novel categories. Although existing OSDA methods align features within the same category while discriminating unknown classes, they perform alignment on domain-variant features and therefore do not fundamentally eliminate domain bias. To tackle this problem, we propose Decoupling Domain Invariance and Variance with Tailored Prompts (PromptDIV) for OSDA, which learns domain-invariant features for alignment. Specifically, we propose One-vs-All Clustering with Text Features (OVAT) to provide domain-unbiased pseudo-labels, Domain-Specific Prompts (DSPs) to decouple domain-invariant and domain-variant features, and Semisupervised and Affinity Contrastive Learning (SMACL) to strengthen the consistency of features within the same category. Extensive experiments on two benchmarks verify that PromptDIV achieves state-of-the-art performance.