Abstract

Source-free unsupervised domain adaptation is a class of practical deep learning methods that generalize to the target domain without transferring data from the source domain. However, existing source-free methods still rely on transferring the source model itself. In many data-critical scenarios, a transferred source model may be vulnerable to membership inference attacks and expose private training data. In this paper, we address a more practical and challenging setting in which the source model cannot be transferred to the target domain: it is treated as a queryable black box that outputs only hard labels. We use public third-party data to probe the source model and obtain supervision information, dispensing with model transfer altogether. To bridge the gap between the third-party data and the target data, we further propose Distributionally Adversarial Training (DAT), which aligns the distribution of the third-party data with that of the target data, yields more informative query results, and improves data efficiency. We call this framework Black-box Probe Domain Adaptation (BPDA); it combines the query mechanism with DAT to probe for and refine supervision information. Experimental results on several domain adaptation datasets demonstrate the practicability and data efficiency of BPDA in query-only, source-free unsupervised domain adaptation.
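The sketch below illustrates the two ingredients summarized above: querying the black-box source model for hard labels on public third-party probe data, and a DAT-style adversarial perturbation that pushes the probe data toward the target distribution. It is a minimal illustration under stated assumptions, not the authors' implementation; `source_api`, `domain_disc`, and all hyperparameters are hypothetical names chosen for exposition.

```python
# Minimal sketch of the query-and-align idea; all names and
# hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F


@torch.no_grad()
def query_black_box(source_api, x):
    # The source model is reachable only through queries that return
    # hard (top-1) labels, never logits, features, or parameters.
    return source_api(x).argmax(dim=1)


def dat_align(x_third_party, domain_disc, steps=5, alpha=0.01):
    # Perturb third-party probe data so a (hypothetical) domain
    # discriminator scores it as "target-like", i.e. nudge its
    # distribution toward the target domain before querying.
    x_adv = x_third_party.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        target_like = torch.ones(x_adv.size(0), 1)  # label 1 = target domain
        loss = F.binary_cross_entropy_with_logits(domain_disc(x_adv), target_like)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - alpha * grad.sign()).detach()  # descend the loss
    return x_adv
```

In use, the aligned probe batch `dat_align(x, domain_disc)` would be sent to `query_black_box`, and the returned hard labels would serve as supervision for training the target model; the discriminator-based alignment objective here is only one plausible instantiation of distribution alignment.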
