Abstract
Single domain generalization aims to train a model that generalizes well to multiple unseen target domains by leveraging the knowledge in a related source domain. Recent methods focus on synthesizing domains with new styles to improve the diversity of the training data. However, mainstream methods rely heavily on an additional generative model when producing augmented data, which increases optimization difficulty and hinders the generation of diverse style data. Moreover, these methods do not sufficiently capture the consistency between the generated and original data when learning feature representations. To address these issues, we propose a novel single domain generalization method, namely DAI, which improves Diversity And Invariance simultaneously to boost the generalization capability of the model. Specifically, DAI consists of a style diversity module and a representation learning module optimized in an adversarial manner. The style diversity module uses a generative model, nAdaIN, to synthesize data with significant style shifts. The representation learning module performs object-aware contrastive learning to capture the invariance between the generated and original data. Furthermore, DAI progressively synthesizes multiple novel domains to increase the style diversity of the generated data. Experimental results on three benchmarks demonstrate the superiority of our method against domain shifts.
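To make the two ideas summarized above concrete, the sketch below pairs a standard AdaIN-style feature stylization step with an InfoNCE-style consistency loss between original and stylized embeddings. The abstract does not specify nAdaIN's formulation, the object-aware positive selection, or the exact contrastive objective, so every name and parameter here (adain, invariance_loss, temperature) is a hypothetical stand-in rather than the paper's implementation.

```python
# Illustrative sketch only: standard AdaIN stylization plus an InfoNCE-style
# consistency loss. This approximates, but is not, the paper's nAdaIN module
# or its object-aware contrastive objective.
import torch
import torch.nn.functional as F

def adain(content_feat, style_feat, eps=1e-5):
    """Replace the channel-wise mean/std of content features with style statistics.

    Both inputs are feature maps of shape (B, C, H, W).
    """
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize content statistics, then re-scale with the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean

def invariance_loss(z_orig, z_aug, temperature=0.1):
    """InfoNCE-style loss pulling embeddings of the same image (original vs.
    stylized) together while pushing other images in the batch apart."""
    z_orig = F.normalize(z_orig, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    logits = z_orig @ z_aug.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_orig.size(0), device=z_orig.device)
    return F.cross_entropy(logits, targets)            # positives on the diagonal
```

In the method as described, the stylization would be applied progressively across multiple synthesized domains and the contrastive positives would be chosen in an object-aware way; this sketch omits both refinements for brevity.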