Abstract

Domain generalization aims to reduce the vulnerability of deep neural networks to out-of-domain distribution shifts. With growing data-privacy concerns, federated domain generalization, where multiple domains are distributed across different local clients, has become an important research problem and poses new challenges for learning domain-invariant information from separated domains. In this paper, we address the problem of federated domain generalization from the perspective of domain hallucination. We propose a novel federated domain hallucination learning framework, which exchanges nothing between clients beyond model weights, based on the idea that a domain hallucination that enlarges the prediction uncertainty of the global model is more likely to transform samples into an unseen domain. We obtain such hallucinations by generating samples that maximize the entropy of the global model while minimizing the cross-entropy of the local model, where the latter loss is introduced to preserve the sample semantics. By training the local models on the learned domain hallucinations, the final model is expected to be more robust to unseen domain shifts. We perform extensive experiments on three object classification benchmarks and one medical image segmentation benchmark. The proposed method outperforms state-of-the-art methods on all the benchmarks, demonstrating its effectiveness.
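For illustration, the hallucination objective described above could be sketched as a gradient-based input perturbation, as in the PyTorch snippet below. The function name `hallucinate`, the iterative update scheme, and the hyperparameters (`steps`, `lr`, `lam`) are assumptions made for exposition, not details taken from the paper.

```python
# A minimal sketch of the hallucination objective from the abstract:
# perturb an input so the *global* model's predictive entropy rises
# (pushing the sample toward an unseen domain) while the *local*
# model's cross-entropy stays low (preserving its semantics).
# Hyperparameters are illustrative assumptions, not the paper's values.
import torch
import torch.nn.functional as F

def hallucinate(x, y, global_model, local_model, steps=5, lr=0.01, lam=1.0):
    x_h = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        # Entropy of the global model's prediction on the perturbed sample.
        p_global = F.softmax(global_model(x_h), dim=1)
        entropy = -(p_global * torch.log(p_global + 1e-8)).sum(dim=1).mean()
        # Cross-entropy of the local model, anchoring the semantics to y.
        ce_local = F.cross_entropy(local_model(x_h), y)
        # Minimize (-entropy + lam * ce_local): maximize global entropy,
        # minimize local cross-entropy.
        loss = -entropy + lam * ce_local
        grad, = torch.autograd.grad(loss, x_h)
        x_h = (x_h - lr * grad).detach().requires_grad_(True)
    return x_h.detach()
```

Consistent with the abstract, the hallucinated samples returned by such a routine would then be used as additional training data for the local models, with only the resulting model weights exchanged between clients.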

