Abstract

A crucial aspect of reliable machine learning is designing a deployable system that generalizes to new, related but unobserved environments. Domain generalization aims to alleviate this prediction gap between observed and unseen environments. Previous approaches commonly incorporate invariant representation learning to achieve good empirical performance. In this paper, we reveal that merely learning an invariant representation is vulnerable to related unseen environments. To this end, we derive a novel theoretical analysis that controls the unseen test environment error in representation learning, which highlights the importance of controlling the smoothness of the representation. In practice, our analysis further inspires an efficient regularization method to improve robustness in domain generalization. The proposed regularization is orthogonal to, and can be straightforwardly adopted in, existing domain generalization algorithms that ensure invariant representation learning. Empirical results show that our algorithm outperforms the base versions on various datasets and invariance criteria.
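
The smoothness control described above can be made concrete with a small sketch. Below, a gradient-norm penalty on the representation is added to an existing invariance-based domain generalization loss; the specific penalty, the `featurizer`/`classifier` split, and the `invariance_loss` hook are illustrative assumptions, not necessarily the paper's exact regularizer.

```python
# Hypothetical sketch: attaching a representation-smoothness penalty to an
# invariance-based domain generalization objective. The input-gradient norm
# below is one illustrative way to encourage a smooth (low-Lipschitz)
# representation; it is not claimed to be the paper's exact regularizer.
import torch
import torch.nn.functional as F


def smoothness_penalty(featurizer, x):
    """Penalize the input-gradient norm of the representation phi = featurizer(x)."""
    x = x.clone().requires_grad_(True)
    phi = featurizer(x)
    # Differentiate the summed features w.r.t. the inputs, per sample.
    grad = torch.autograd.grad(phi.sum(), x, create_graph=True)[0]
    return grad.pow(2).flatten(1).sum(dim=1).mean()


def total_loss(featurizer, classifier, invariance_loss, env_batches, lam=0.1):
    """env_batches: list of (x, y) pairs, one per observed source environment."""
    erm = sum(F.cross_entropy(classifier(featurizer(x)), y) for x, y in env_batches)
    inv = invariance_loss(featurizer, env_batches)   # e.g. a DANN / CORAL / IRM term
    smooth = sum(smoothness_penalty(featurizer, x) for x, _ in env_batches)
    n = len(env_batches)
    return erm / n + inv + lam * smooth / n
```

Because the penalty only touches the featurizer, a term of this kind can be dropped into any base algorithm that already supplies an invariance loss, in line with the claim that the regularization is orthogonal to existing methods.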

Highlights

  • Most research in deep learning assumes that models are trained and tested on data drawn from a fixed distribution

  • Contributions: In this paper, we aim to address these theoretical problems in representation learning-based domain generalization

  • (1) We reveal the limitation of representation learning in domain generalization when merely ensuring invariance criteria, which can lead to over-matching on the observed environments; i.e., a complex or non-smooth representation function can be vulnerable to even a small distribution shift


Summary

Introduction

Most research in deep learning assumes that models are trained and tested on data drawn from a fixed distribution. Prediction performance can dramatically degrade in other regions with the same objects but different environmental backgrounds. To this end, domain generalization has recently been proposed and analyzed to alleviate the prediction gap between the observed training (S) and unseen test (T) datasets. Invariant representation learning is an auxiliary task that ensures invariance among the observable source environments, and it can take various forms. Marginal feature invariance (Ganin et al., 2016) aims at enforcing

$$\mathbb{P}_{x_1 \sim S_1(x)}[\phi(x_1)] = \mathbb{P}_{x_2 \sim S_2(x)}[\phi(x_2)] = \cdots = \mathbb{P}_{x_T \sim S_T(x)}[\phi(x_T)],$$

i.e., the marginal distribution of the representation $\phi(x_t)$ is identical for all observed environments $t \in \{1, \dots, T\}$.
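
In practice, such a marginal invariance condition is enforced by penalizing a distributional discrepancy between the representations of each pair of observed environments. The sketch below uses an RBF-kernel MMD as one such discrepancy; this is an illustrative stand-in for, e.g., the adversarial discriminator of Ganin et al. (2016), not the estimator prescribed by any particular method cited above.

```python
# Minimal sketch: a marginal feature invariance penalty that matches the
# representation distributions phi(x_t) across observed environments via a
# pairwise RBF-kernel MMD. The kernel and the pairwise averaging are
# illustrative assumptions, not a specific method's prescribed estimator.
import torch


def rbf_mmd2(a, b, sigma=1.0):
    """Biased MMD^2 estimate between two feature batches a and b."""
    def k(u, v):
        return torch.exp(-torch.cdist(u, v).pow(2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()


def marginal_invariance_penalty(featurizer, env_inputs):
    """env_inputs: list of input batches x_t, one per observed environment S_t."""
    feats = [featurizer(x) for x in env_inputs]
    penalty, pairs = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            penalty = penalty + rbf_mmd2(feats[i], feats[j])
            pairs += 1
    return penalty / max(pairs, 1)
```

Driving this penalty toward zero aligns the feature distributions across the observed environments, which is exactly the kind of invariance the paper argues is insufficient on its own without additionally controlling the smoothness of the representation.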

