Abstract
Domain generalization (DG) aims to generalize a model trained on multiple source domains to an unseen target domain with a different distribution. An effective approach in DG is to generate samples with novel domain properties through data augmentation, which extends the representation space of the source domains and enables the model to learn semantically invariant representations across domains. However, existing methods usually underappreciate the inherent intra-domain style invariance within each domain when synthesizing new domains, resulting in limited diversity of the augmented data. We introduce an intuitive perspective on multi-domain data augmentation: a Multi-Domain Feature Stylization (MDFS) module that generates style-diversified features carrying the styles of other domains, thereby extending the style representations of the source domains. By incorporating the stylized features into training, the model is encouraged to automatically learn representations that are robust against domain shift. Nevertheless, the out-of-domain styles in the stylized features make the entanglement of style and semantic representations more likely. In this paper, we therefore propose a novel style-semantic contrastive loss to disentangle these two types of representations in the latent feature space. Moreover, to preserve the semantic consistency between the original and stylized features, we employ a semantic consistency regularization that keeps the model's predicted probabilities consistent. Extensive experiments and analysis on three benchmark datasets show that our proposed method outperforms state-of-the-art methods.
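To make the two ingredients of the abstract concrete, the following is a minimal sketch in PyTorch, assuming that MDFS performs AdaIN/MixStyle-style mixing of channel-wise feature statistics across domains and that the semantic consistency regularization is a symmetric KL term between the predictions for original and stylized features. The function names and the exact mixing scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def stylize(feat_a, feat_b, alpha=0.5, eps=1e-6):
    """Assumed stylization: replace the channel-wise style statistics
    (mean/std) of feat_a with a convex mix of its own statistics and
    those of feat_b, a feature map drawn from another source domain."""
    mu_a = feat_a.mean(dim=(2, 3), keepdim=True)
    sig_a = feat_a.std(dim=(2, 3), keepdim=True) + eps
    mu_b = feat_b.mean(dim=(2, 3), keepdim=True)
    sig_b = feat_b.std(dim=(2, 3), keepdim=True) + eps
    mu_mix = alpha * mu_a + (1 - alpha) * mu_b
    sig_mix = alpha * sig_a + (1 - alpha) * sig_b
    # Normalize away the original style, then re-apply the mixed style.
    return sig_mix * (feat_a - mu_a) / sig_a + mu_mix

def consistency_loss(logits_orig, logits_styl):
    """Assumed semantic consistency regularization: encourage the
    classifier to produce the same predicted distribution for the
    original and stylized features via a symmetric KL divergence."""
    p = F.log_softmax(logits_orig, dim=1)
    q = F.log_softmax(logits_styl, dim=1)
    return 0.5 * (F.kl_div(q, p, log_target=True, reduction="batchmean")
                  + F.kl_div(p, q, log_target=True, reduction="batchmean"))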