Abstract
In deep neural networks, performance can degrade when test data distributions differ from training data. Unsupervised Domain Generalization (UDG) aims to improve generalization to unseen domains by leveraging multiple source domains without supervision. Traditional methods focus on extracting domain-invariant features, potentially at the expense of feature-space integrity and generalization potential. We present a Multi-Domain Representation Network (MDRN) for unsupervised multi-domain learning. MDRN disentangles and preserves both domain-invariant and domain-specific features through an unsupervised cross-domain reconstruction task. It employs content encoders for domain-invariant features and multi-domain style encoders for domain-specific characteristics. By merging these features according to domain similarity, MDRN constructs a comprehensive feature space that enhances image reconstruction across domains. Additionally, MDRN integrates domain-specific classifiers, which learn domain classification and provide a weighted fusion of domain-specific features. This design enables effective inter-domain distance measurement and feature integration. Experiments on PACS and DomainNet show MDRN's superior performance over existing state-of-the-art UDG approaches, highlighting its effectiveness in handling distribution shifts between source and target domains.
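To make the described pipeline concrete, the following is a minimal PyTorch sketch of the overall idea: a shared content encoder for domain-invariant features, per-domain style encoders for domain-specific features, a domain classifier whose probabilities weight the fusion of style features, and a decoder trained with a reconstruction objective. All module names, layer shapes, and the additive style-injection step are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDRNSketch(nn.Module):
    """Illustrative sketch (not the official MDRN code): shared content encoder,
    per-domain style encoders, domain classifier used as fusion weights,
    and a decoder for cross-domain reconstruction."""

    def __init__(self, num_domains: int, feat_dim: int = 256):
        super().__init__()
        # Shared content encoder -> domain-invariant feature map.
        self.content_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, 2, 1), nn.ReLU(),
        )
        # One style encoder per source domain -> domain-specific vectors.
        self.style_encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
            for _ in range(num_domains)
        ])
        # Domain classifier: its softmax output serves as fusion weights.
        self.domain_classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, num_domains),
        )
        # Decoder reconstructs the image from content + fused style.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x):
        content = self.content_encoder(x)                                      # (B, C, H/4, W/4)
        styles = torch.stack([enc(x) for enc in self.style_encoders], dim=1)   # (B, D, C)
        domain_logits = self.domain_classifier(content)                        # (B, D)
        weights = F.softmax(domain_logits, dim=1).unsqueeze(-1)                # (B, D, 1)
        fused_style = (weights * styles).sum(dim=1)                            # (B, C)
        # Inject fused style into the content map by simple broadcast addition
        # (the paper's fusion mechanism may differ; this is only a placeholder).
        recon = self.decoder(content + fused_style[:, :, None, None])
        return recon, domain_logits

def sketch_loss(model, x, domain_label):
    # Unsupervised objective: image reconstruction plus domain classification,
    # using domain indices as free labels (no class supervision required).
    recon, domain_logits = model(x)
    return F.mse_loss(recon, x) + F.cross_entropy(domain_logits, domain_label)
```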