Abstract

Open Set Domain Adaptation (OSDA) focuses on bridging the domain gap between a labeled source domain and an unlabeled target domain, while also rejecting as unknown the target samples from classes not present in the source. The challenges of this task are closely related to those of Positive-Unlabeled (PU) learning, where it is essential to discriminate between positive (known) and negative (unknown) class samples in the unlabeled target data. With this newly discovered connection, we leverage the theoretical framework of PU learning for OSDA and, at the same time, extend PU learning to tackle uneven data distributions. Our method combines domain adversarial learning with a new non-negative risk estimator for PU learning based on self-supervised sample reconstruction. With experiments on digit recognition and object classification, we validate our risk estimator and demonstrate that our approach reduces the domain gap without suffering from negative transfer.
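For context on the non-negative risk estimator mentioned above, the following is a minimal sketch of the standard non-negative PU risk of Kiryo et al., which clamps the estimated negative-class risk at zero; the abstract's estimator builds on this framework (its self-supervised reconstruction component is not shown). The function names, the sigmoid loss, and the use of a known class prior `pi_p` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid_loss(z):
    # Sigmoid loss l(z) = 1 / (1 + exp(z)): small when the score z is
    # large and positive, close to 1 when z is large and negative.
    return 1.0 / (1.0 + np.exp(z))

def nn_pu_risk(scores_p, scores_u, pi_p):
    """Non-negative PU risk estimate (Kiryo et al., 2017 style).

    scores_p: classifier scores g(x) on labeled positive (known) samples
    scores_u: classifier scores g(x) on unlabeled samples
    pi_p:     class prior P(y = +1), assumed known or estimated separately
    """
    scores_p = np.asarray(scores_p, dtype=float)
    scores_u = np.asarray(scores_u, dtype=float)
    # Risk of positives classified as positive / as negative.
    risk_p_pos = np.mean(sigmoid_loss(scores_p))
    risk_p_neg = np.mean(sigmoid_loss(-scores_p))
    # Risk of unlabeled samples classified as negative.
    risk_u_neg = np.mean(sigmoid_loss(-scores_u))
    # The plain PU estimate of the negative-class risk,
    # risk_u_neg - pi_p * risk_p_neg, can go negative on finite samples;
    # the non-negative estimator clamps it at zero to curb overfitting.
    neg_risk = max(0.0, risk_u_neg - pi_p * risk_p_neg)
    return pi_p * risk_p_pos + neg_risk
```

With well-separated scores (positives scored high, unlabeled negatives scored low), the estimated risk is close to zero; the clamp guarantees it never drops below the positive-part risk.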
