Abstract

Deep learning achieves state-of-the-art performance in many applications. However, the generalization capabilities of the learned networks are limited to the training or source domain, and their predictive power decreases when these models are evaluated on a target domain different from the source domain. Joint adversarial domain adaptation networks are currently the preferred models for source-to-target domain adaptation due to their strong empirical performance. These models simultaneously learn a classifier, learn an invariant representation through an adversarial min–max game, and adapt local structures between domains. For the latter, it is common practice to incorporate pseudo labels, which can, however, be unreliable due to false predictions on challenging tasks. This work proposes the Domain Adversarial Tangent Subspace Alignment (DATSA) network, which models data as affine subspaces and adversarially aligns local approximations of manifolds across domains. DATSA addresses the drawbacks of joint adversarial domain adaptation networks by not requiring pseudo labels for local alignment, because it relies on self-supervised learning for subspace alignment. Additionally, DATSA's adaptations are explainable to some extent, and the results show that it is competitive with other models in terms of accuracy.
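To make the subspace-alignment idea underlying this line of work concrete, the sketch below shows classic *linear* subspace alignment between a source and a target domain: each domain is represented by its top-d PCA subspace, and the source basis is mapped onto the target basis via the alignment matrix M = Ps^T Pt. This is only an illustration of the general alignment principle, not DATSA itself (which aligns local tangent subspaces adversarially); the function name and dimensions are hypothetical.

```python
import numpy as np

def subspace_alignment(Xs, Xt, d):
    """Illustrative linear subspace alignment (not DATSA itself).

    Projects centered source data into its top-d PCA subspace and then
    maps that subspace onto the target PCA subspace with M = Ps^T Pt.
    """
    # Rows of Vt from the SVD are the principal directions; keep the top d
    # as columns of each domain's basis matrix.
    Ps = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:d].T
    Pt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:d].T
    M = Ps.T @ Pt                       # d x d alignment matrix
    Zs = (Xs - Xs.mean(0)) @ Ps @ M     # source features in the aligned subspace
    Zt = (Xt - Xt.mean(0)) @ Pt         # target features in the target subspace
    return Zs, Zt

# Toy example with synthetic domains (shapes are arbitrary).
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5))
Xt = rng.normal(size=(80, 5)) @ rng.normal(size=(5, 5))
Zs, Zt = subspace_alignment(Xs, Xt, d=2)
print(Zs.shape, Zt.shape)  # (100, 2) (80, 2)
```

DATSA replaces this single global linear map with adversarially learned alignments of local tangent-subspace approximations, so no pseudo labels are needed to match class-conditional structure.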
