Abstract

Recently, active domain adaptation (ADA) has emerged as a promising paradigm, as it significantly boosts model performance with a small amount of additional supervision. Despite their impressive performance, existing ADA methods assume that the source domain is available during model adaptation. In many scenarios, however, the source domain is unavailable due to issues such as data privacy, and ADA methods consequently fail to achieve the expected results or become inapplicable altogether. To address this problem, we design an effective framework named Feature Mixing and Self-Training (FMAS) for source-free active domain adaptation. Specifically, we adopt Clustering-based Alignment to sharpen the classification boundary, which benefits the subsequent sample selection and also achieves initial domain adaptation. For active sample selection, we use feature mixing to explore the uncertainty and diversity of the samples: we construct a candidate pool via Mixup-based Uncertainty Selection and then reduce redundancy within the pool via Entropy-based Diversity Selection. Finally, for the remaining unlabeled samples, we adopt a self-training framework that selects high-confidence samples for pseudo-labeling to exploit the information in the unlabeled data. Thorough experiments show that FMAS improves average accuracy by at least 1% over competing methods on the OfficeHome and MiniDomainNet benchmarks, and that even without source domain data, FMAS outperforms previous ADA methods.
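To make the two selection steps concrete, below is a minimal PyTorch sketch, not the authors' released code. The feature-extractor/classifier split, the class-prototype tensor `anchor_feats`, the mixing ratio `lam`, and the labeling `budget` are all illustrative assumptions, as are the exact uncertainty and diversity criteria (KL shift under feature-level Mixup, and a per-class entropy-ranked round-robin, respectively).

```python
import torch
import torch.nn.functional as F

def mixup_uncertainty(feat_extractor, classifier, x, anchor_feats, lam=0.5):
    """Mixup-based Uncertainty Selection (sketch): mix each target feature
    with a class-anchor feature and measure how much the prediction shifts;
    a large shift suggests the sample lies near the decision boundary."""
    with torch.no_grad():
        f = feat_extractor(x)                         # target features, (B, D)
        p = F.softmax(classifier(f), dim=1)           # original predictions
        f_mix = lam * f + (1.0 - lam) * anchor_feats  # feature-level Mixup
        p_mix = F.softmax(classifier(f_mix), dim=1)   # predictions after mixing
        # Per-sample KL divergence between mixed and original predictions
        return F.kl_div(p_mix.log(), p, reduction="none").sum(dim=1)

def entropy_diversity_select(probs, candidates, budget):
    """Entropy-based Diversity Selection (sketch): spread the labeling budget
    across predicted classes so the pool is not dominated by near-duplicates;
    within each class, prefer samples with higher predictive entropy."""
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    by_class = {}
    for i in candidates:
        by_class.setdefault(int(probs[i].argmax()), []).append(i)
    for idxs in by_class.values():
        idxs.sort(key=lambda i: ent[i].item(), reverse=True)
    picked = []
    while len(picked) < budget and any(by_class.values()):
        for idxs in by_class.values():                # round-robin over classes
            if idxs and len(picked) < budget:
                picked.append(idxs.pop(0))
    return picked
```

In the full pipeline, the `budget` samples picked here would be sent to the oracle for labeling, while the remaining unlabeled samples enter the self-training loop, where only predictions above a confidence threshold are kept as pseudo-labels.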
