Abstract

Although it is rarely stated explicitly, most recent approaches to domain adaptation rest on theoretically unjustified assumptions on the one hand and on (often hidden) inductive biases on the other. This paper highlights that achieving feature alignment, which is commonly assumed to minimize theoretical upper bounds on target-domain risk, does not guarantee low target risk. Furthermore, through a series of experiments, the paper shows that deep domain adaptation methods rely heavily on hidden inductive biases embedded in common practices, including model pretraining and encoder architecture design. Third, the paper argues that handcrafted priors may not suffice to bridge distant domains: powerful parametric priors can instead be learned from data, leading to large improvements in target accuracy. To this end, the paper proposes a meta-learning strategy for discovering inductive biases that effectively solve specific domain transfers, outperforming handcrafted priors on several image classification tasks.
