Abstract

We tackle the problem of generalizing a predictor trained on a set of source domains to an unseen target domain, where the source and target domains are different but related to one another, i.e., the domain generalization problem. Prior adversarial methods rely on solving minimax problems to align the components of the domains (i.e., a set of marginal distributions, or a set of marginal distributions together with multiple sets of class-conditional distributions) in the neural network embedding space. However, these methods introduce additional parameters (one discriminator for each set of distributions) to the network predictor and are difficult to train. In this work, we propose to directly align the domains themselves by solving a minimax problem that can be decomposed and converted into a minimization problem. Specifically, we analytically solve the max problem with respect to (w.r.t.) the domain discriminators and convert the minimax problem into a minimization problem w.r.t. the embedding function. This is advantageous because our approach introduces no additional network parameters and simplifies the training procedure. We evaluate our approach on several multi-domain datasets and demonstrate its superiority over relevant methods. The source code is available at https://github.com/sentaochen/Decomposed-Adversarial-Domain-Generalization.
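To make the decomposition concrete, the sketch below works through a standard minimax-to-min conversion of this flavor from the adversarial alignment literature. The notation is illustrative, not taken from the paper: g denotes the embedding function, p_k the density of the embedded data under source domain k, and D a softmax domain discriminator; the paper's exact objective may differ.

```latex
% Illustrative notation (assumed, not necessarily the paper's): g is the
% embedding function, p_k the density of g(x) under source domain k = 1,...,K,
% and D = (D_1,...,D_K) a softmax domain discriminator.
\min_{g}\,\max_{D}\;\sum_{k=1}^{K}\mathbb{E}_{z\sim p_k}\!\left[\log D_k(z)\right]
\quad\text{s.t.}\quad \sum_{k=1}^{K} D_k(z)=1 .
% Maximizing pointwise in z gives the optimal discriminator in closed form:
D_k^{*}(z)=\frac{p_k(z)}{\sum_{j=1}^{K}p_j(z)} .
% Substituting D^{*} back eliminates the max, leaving a minimization over g of
% a generalized Jensen--Shannon divergence among the domain embedding densities:
\min_{g}\; K\,\mathrm{JSD}\!\left(p_1,\dots,p_K\right)-K\log K ,
\qquad
\mathrm{JSD}=\frac{1}{K}\sum_{k=1}^{K}\mathrm{KL}\!\left(p_k\,\middle\|\,\bar{p}\right),
\quad
\bar{p}=\frac{1}{K}\sum_{j=1}^{K}p_j .
```

Because the inner maximizer is available in closed form, no discriminator network needs to be parameterized or trained, which is the sense in which a decomposition of this kind introduces no additional network parameters.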
