Abstract

Federated Learning (FL) is a distributed machine learning technique that allows numerous Internet of Things (IoT) devices to jointly train a machine learning model with the help of a central server. In FL, raw data never leaves each IoT device, so the local data of IoT devices remain private. Because distributed IoT devices usually collect their local data independently, the dataset of each IoT device may naturally form a distinct source domain. In real-world applications, a model trained over multiple source domains may generalize poorly to unseen target domains. To address this issue, we propose FedADG to equip federated learning with domain generalization capability. FedADG employs federated adversarial learning to measure and align the distributions of different source domains by matching each of them to a reference distribution. The reference distribution is adaptively generated (by accommodating all source domains) to minimize the domain-shift distance during alignment. The learned feature representation therefore tends to be universal and generalizes well to unseen target domains while protecting local data privacy. Extensive experiments on various datasets demonstrate that FedADG achieves performance comparable to the state of the art.
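The core idea of the abstract — adversarially pulling each source domain's feature distribution toward a shared reference distribution while only model parameters (never raw data) leave the clients — can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: it uses 1-D domains, a linear feature extractor per client, a fixed standard-normal reference standing in for FedADG's adaptively generated reference, a logistic-regression discriminator, and plain FedAvg parameter averaging at the server.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy setup (assumption, for illustration): three source domains whose
# 1-D raw data have different means, i.e. a domain shift.
domains = [rng.normal(loc=m, scale=1.0, size=(256, 1)) for m in (-2.0, 0.0, 2.0)]

# Each client holds a copy of a shared linear feature extractor f(x) = w*x + b.
clients = [{"w": 1.0, "b": 0.0} for _ in domains]

# Fixed reference distribution (a stand-in for FedADG's adaptively
# generated reference): standard normal samples.
def reference(n):
    return rng.normal(size=(n, 1))

# Discriminator: logistic regression scoring reference (label 1) vs. client features (label 0).
disc = {"w": 0.0, "b": 0.0}

def disc_logit(z):
    return disc["w"] * z + disc["b"]

lr = 0.05
for step in range(300):
    for c, x in zip(clients, domains):
        z = c["w"] * x + c["b"]            # local feature representation
        r = reference(len(x))              # reference samples
        # -- discriminator step: ascend E[log D(r)] + E[log(1 - D(z))] --
        p_r, p_z = sigmoid(disc_logit(r)), sigmoid(disc_logit(z))
        disc["w"] += lr * (np.mean((1 - p_r) * r) - np.mean(p_z * z))
        disc["b"] += lr * (np.mean(1 - p_r) - np.mean(p_z))
        # -- extractor step: ascend E[log D(z)] so local features fool the
        #    discriminator, i.e. move toward the reference distribution --
        p_z = sigmoid(disc_logit(z))
        c["w"] += lr * np.mean((1 - p_z) * disc["w"] * x)
        c["b"] += lr * np.mean((1 - p_z) * disc["w"])
    # -- server step: average extractor parameters (FedAvg); raw data stays local --
    w_avg = float(np.mean([c["w"] for c in clients]))
    b_avg = float(np.mean([c["b"] for c in clients]))
    for c in clients:
        c["w"], c["b"] = w_avg, b_avg
```

After training, all clients share a single extractor whose features were adversarially pushed toward one common reference, which is the mechanism the abstract relies on for cross-domain generalization; the actual FedADG reference generator and networks are richer than this linear sketch.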
