Abstract

Unsupervised domain adaptation has made great progress in the past few years. Nevertheless, most existing methods work in the so-called closed-set scenario, assuming that the classes depicted by the target samples are exactly the same as those of the source domain. In this paper, we tackle the more challenging scenario of open set domain adaptation, in which samples of unknown classes can be present in the target domain, with a novel end-to-end training approach. Our method employs entropy minimization for unsupervised domain adaptation and aggressively uses unknown samples in training by forcing the classifier to output a probability of 0.5 for the unknown class. Experimental evidence demonstrates that our approach significantly outperforms the state of the art in open set domain adaptation.
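
As a rough illustration of the training signal described above, the sketch below shows one way the two objectives could be written in PyTorch: an entropy-minimization term on the target predictions and a binary cross-entropy term that pushes the predicted probability of the unknown class toward 0.5. The function names, the assumption that the classifier emits K + 1 logits with the unknown class last, and the way the losses are combined are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' code): two loss terms suggested by the
# abstract, assuming a classifier that outputs K + 1 logits per sample,
# with the last entry reserved for the "unknown" class.
import torch
import torch.nn.functional as F

def entropy_minimization_loss(target_logits):
    """Conditional entropy of the target predictions; minimizing it
    encourages confident (low-entropy) class assignments on target data."""
    p = F.softmax(target_logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

def unknown_boundary_loss(target_logits):
    """Binary cross-entropy between the predicted unknown-class probability
    and a fixed target of 0.5, as described in the abstract."""
    p_unknown = F.softmax(target_logits, dim=1)[:, -1]
    t = 0.5  # fixed target probability for the unknown class
    return -(t * torch.log(p_unknown + 1e-8)
             + (1.0 - t) * torch.log(1.0 - p_unknown + 1e-8)).mean()

# Hypothetical usage inside a training step on unlabeled target samples:
# loss = source_classification_loss \
#        + entropy_minimization_loss(tgt_logits) \
#        + unknown_boundary_loss(tgt_logits)
```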
