Abstract
Domain generalization aims to generalize a network trained on multiple domains to unknown yet related domains. Operating under the assumption that invariant information generalizes well to unknown domains, previous work has aimed to minimize the discrepancies among the feature distributions of the given domains. However, without prior regularization of the feature distributions, the network in practice overfits to the invariant information of the given domains. Moreover, if the given domains contain insufficient samples, domain generalizability is limited because diverse domain variations are not captured. To address these two drawbacks, we propose to explicitly map features of both known and unknown domains onto a latent space that follows a fixed Gaussian mixture distribution via variational coding, so that features of different classes follow Gaussian distributions with different means. The predefined latent space narrows the discrepancies between known and unknown domains and effectively separates samples of different classes. Furthermore, we propose to perturb sample features with gradients of the distribution-regularized loss. This perturbation generates samples that lie beyond but near the latent space of the prior distributions, thereby enriching the domain variations seen during training. Experiments and visualizations demonstrate the effectiveness of the proposed method.
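The two mechanisms described in the abstract can be sketched in code. The snippet below is a minimal PyTorch illustration under stated assumptions, not the paper's implementation: it assumes unit-variance class-conditional Gaussian priors with fixed means, and the function names (`kl_to_class_gaussian`, `perturb_features`), the step size `eps`, and all tensor shapes are hypothetical choices made for the example.

```python
import torch

def kl_to_class_gaussian(mu, logvar, class_means, labels):
    # KL( N(mu, diag(exp(logvar))) || N(m_y, I) ), where m_y is the fixed prior
    # mean of the ground-truth class y; averaged over the batch. This is one
    # plausible form of a regularizer toward a fixed Gaussian mixture prior.
    prior_mu = class_means[labels]                  # (B, D) class-wise prior means
    var = logvar.exp()
    kl = 0.5 * (var + (mu - prior_mu).pow(2) - 1.0 - logvar).sum(dim=1)
    return kl.mean()

def perturb_features(features, dist_loss_fn, eps=0.5):
    # One gradient-ascent step on the distribution-regularized loss, so the
    # perturbed features move slightly outside the prior latent space
    # (an FGSM-style step; eps is a hypothetical hyper-parameter).
    features = features.detach().requires_grad_(True)
    loss = dist_loss_fn(features)
    (grad,) = torch.autograd.grad(loss, features)
    return (features + eps * grad.sign()).detach()

# Hypothetical usage: 32 encoded features of dimension 64, 10 classes with
# fixed prior means, and a reparameterized latent sample z.
mu, logvar = torch.randn(32, 64), torch.zeros(32, 64)
class_means = torch.randn(10, 64)
labels = torch.randint(0, 10, (32,))
reg_loss = kl_to_class_gaussian(mu, logvar, class_means, labels)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
z_adv = perturb_features(z, lambda f: (f - class_means[labels]).pow(2).sum(dim=1).mean())
```

The perturbation direction here simply pushes samples away from their class prior mean; the actual distribution-regularized loss in the paper may differ.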