Abstract
Federated Learning (FL) is a distributed machine learning technique that allows numerous Internet of Things (IoT) devices to jointly train a machine learning model with the help of a central server. In FL, local data never leaves each IoT device, so the local data of IoT devices remain protected. Because distributed IoT devices usually collect their local data independently, the dataset of each IoT device may naturally form a distinct source domain. In real-world applications, a model trained over multiple source domains may generalize poorly to unseen target domains. To address this issue, we propose FedADG to equip federated learning with domain generalization capability. FedADG employs federated adversarial learning to measure and align the distributions of different source domains by matching each distribution to a reference distribution. The reference distribution is adaptively generated (by accommodating all source domains) to minimize the domain shift distance during alignment. Consequently, the learned feature representation tends to be universal, and thus generalizes well to unseen target domains while protecting local data privacy. Extensive experiments on various datasets demonstrate that FedADG achieves performance comparable to the state-of-the-art.
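The core idea above — each client pulling its local feature distribution toward a shared reference distribution, with the server averaging the resulting models — can be illustrated with a deliberately simplified toy sketch. All names here are illustrative (not from the paper's code), and the alignment step is a crude moment-matching stand-in: it matches only feature means, whereas FedADG uses adversarial learning with a discriminator and an adaptively generated reference distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, ref_mean, lr=0.02, steps=50):
    """One client's local training: pull the mean of its features toward
    the reference distribution's mean. (A moment-matching stand-in for
    the paper's adversarial alignment with a discriminator.)"""
    m = X.mean(axis=0)                       # mean input of this source domain
    for _ in range(steps):
        diff = m @ w - ref_mean              # feature-mean gap to the reference
        w = w - lr * 2 * np.outer(m, diff)   # gradient of ||m @ w - ref_mean||^2
    return w

d_in, d_feat, n_clients = 4, 2, 3
w_global = rng.normal(size=(d_in, d_feat))   # shared linear feature extractor
ref_mean = np.zeros(d_feat)                  # toy reference distribution: zero mean

# Each client holds data from a distinct source domain (shifted by k).
clients = [rng.normal(loc=k, size=(50, d_in)) for k in range(n_clients)]

def align_err(w):
    """Average distance between each domain's feature mean and the reference."""
    return np.mean([np.linalg.norm(X.mean(axis=0) @ w - ref_mean)
                    for X in clients])

err_before = align_err(w_global)
for _ in range(5):                           # federated rounds
    local_ws = [local_update(w_global, X, ref_mean) for X in clients]
    w_global = np.mean(local_ws, axis=0)     # FedAvg-style aggregation
err_after = align_err(w_global)
print(err_before > err_after)                # alignment improves across rounds
```

After a few rounds, the averaged extractor maps all three source domains noticeably closer to the reference, which is the property that lets the representation transfer to unseen target domains; the real method additionally trains a task classifier on the aligned features.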