Abstract
Conventional unsupervised domain adaptation (UDA) and domain generalization (DG) methods rely on the assumption that all source domains can be directly accessed and combined for model training. However, this centralized training strategy may violate privacy policies in many real-world applications. A paradigm for tackling this problem is to train multiple local models and aggregate a generalized central model without data sharing. Recent methods have made remarkable advances in this paradigm by exploiting parameter alignment and aggregation. However, as the variety of source domains increases, directly aligning and aggregating local parameters becomes more challenging. Adopting a different approach in this work, we devise a data-free semantic collaborative distillation strategy to learn domain-invariant representations for both federated UDA and DG. Each local model transmits its predictions to the central server and derives its target distribution from the average of the other local models' distributions, facilitating the mutual transfer of domain-specific knowledge. When unlabeled target data is available, we introduce a novel UDA strategy, termed knowledge filter, to adapt the central model to the target data. Extensive experiments on four UDA and DG datasets demonstrate that our method achieves competitive performance compared with state-of-the-art methods.
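To make the collaborative distillation idea concrete, the following is a minimal sketch (not the authors' implementation) of how each client's distillation target could be formed as the average of the other clients' predicted distributions and matched with a KL-divergence loss. All names, the temperature parameter, and tensor shapes are assumptions introduced purely for illustration.

```python
# Sketch only: each client sends soft predictions to the server; client k's
# distillation target is the average of the OTHER clients' distributions.
import torch
import torch.nn.functional as F

def distillation_targets(client_logits, temperature=2.0):
    """client_logits: list of [batch, num_classes] tensors, one per client.
    Returns, for each client, the averaged softened distribution of the others."""
    probs = [F.softmax(logits / temperature, dim=1) for logits in client_logits]
    targets = []
    for k in range(len(probs)):
        others = [p for i, p in enumerate(probs) if i != k]
        targets.append(torch.stack(others).mean(dim=0))
    return targets

def distillation_loss(student_logits, target_probs, temperature=2.0):
    """KL divergence between a client's softened prediction and its averaged target."""
    log_p = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_p, target_probs, reduction="batchmean") * temperature ** 2

# Hypothetical usage with 3 clients producing logits on a shared batch
logits = [torch.randn(8, 10) for _ in range(3)]
targets = distillation_targets(logits)
losses = [distillation_loss(logits[k], targets[k]) for k in range(3)]
```

Exchanging only prediction-level information in this way keeps the raw source data local, which is the property the abstract's data-free, privacy-preserving setting requires.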