Abstract

Conventional machine learning models are often vulnerable to samples whose distributions differ from those of the training samples, a problem known as domain shift. Domain Generalization (DG) addresses this issue by training a model on multiple source domains and generalizing it to arbitrary unseen target domains. Despite remarkable results in DG, a majority of existing works lack a deep understanding of the feature representations learned by DG models, resulting in limited generalization ability when facing out-of-distribution domains. In this paper, we aim to learn a domain transformation space via a domain transformer network (DTN), which explicitly mines the relationships among multiple domains and constructs transferable feature representations for downstream tasks by interpreting each feature as a semantically weighted combination of multiple domain-specific features. Our DTN is encouraged to meta-learn the properties and characteristics of domains during training on multiple seen domains, making the transformed feature representations more semantically meaningful and thus better able to generalize to unseen domains. Once the model is trained, the feature representations of unseen target domains can also be inferred adaptively by selectively combining the feature representations from the diverse set of seen domains. We conduct extensive experiments on five DG benchmarks, and the results strongly demonstrate the effectiveness of our approach.
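The core idea of interpreting a feature as a semantically weighted combination of domain-specific features can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `transform_feature`, the use of one prototype vector per seen domain, and the dot-product/softmax weighting are all assumptions made for the sake of the example.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def transform_feature(query_feat, domain_feats):
    """Hypothetical sketch: represent a sample's feature as a
    semantically weighted combination of domain-specific features.

    query_feat:   (d,)  feature of a sample from a (possibly unseen) domain
    domain_feats: (k, d) one prototype feature per seen source domain
    returns:      (d,)  transformed feature in the combination space
    """
    # similarity of the sample to each seen domain's prototype
    scores = domain_feats @ query_feat        # shape (k,)
    # semantic weights over the seen domains (sum to 1)
    weights = softmax(scores)
    # convex combination of the domain-specific features
    return weights @ domain_feats             # shape (d,)

# toy usage: 3 seen domains, feature dimension 4
rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 4))
x = rng.normal(size=4)
z = transform_feature(x, protos)
print(z.shape)  # (4,)
```

Because the weights depend only on similarities computed at inference time, the same mechanism applies unchanged to a sample from an unseen target domain: its representation is recombined from the seen-domain features it most resembles.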
