Abstract

Federated Learning (FL) has recently drawn considerable attention because it enables multiple end devices to collaboratively learn a global model without collecting device data. In practice, end devices are often distributed across uncorrelated environments and generate non-IID data, which can cause the Artificial Intelligence (AI) model weights to diverge among devices and degrade model accuracy after aggregation. In this paper, to address the non-IID data problem in FL, we treat it as a distribution adaptation problem among multiple source domains, analyze the feasibility of feature augmentation, and propose a novel method called Label-wisE Distribution Adaptive Federated Learning (LEDA-FL). First, to reduce divergence in the label-wise feature space, we integrate a modified Conditional Variational AutoEncoder (CVAE) to align the label-wise feature distributions among clients. Second, we augment the label-wise features of FL clients to improve FL performance in terms of both test accuracy and communication efficiency. Finally, we conduct extensive experiments on five popular datasets; the results show that our method improves the test accuracy of the global model (e.g., a 6.2% improvement on CIFAR100 over FedProx) and the communication efficiency of FL (e.g., roughly a 60% reduction in communication cost on CIFAR100 over FedProx).
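The abstract does not give implementation details of the modified CVAE, but the label-conditioned data flow it describes (encode a feature together with its label, reparameterize, decode, and sample from a shared prior to augment label-wise features) can be sketched as follows. This is a minimal NumPy illustration with untrained random weights; all names, dimensions, and the two-layer architecture are assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
FEAT_DIM, NUM_CLASSES, LATENT_DIM, HID = 32, 10, 8, 64

def one_hot(y, num_classes=NUM_CLASSES):
    out = np.zeros((len(y), num_classes))
    out[np.arange(len(y)), y] = 1.0
    return out

def dense(in_dim, out_dim):
    # Random weights for illustration; a real FL client would train these.
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

W_enc, b_enc = dense(FEAT_DIM + NUM_CLASSES, HID)
W_mu, b_mu = dense(HID, LATENT_DIM)
W_lv, b_lv = dense(HID, LATENT_DIM)
W_dec1, b_dec1 = dense(LATENT_DIM + NUM_CLASSES, HID)
W_dec2, b_dec2 = dense(HID, FEAT_DIM)

def encode(x, y):
    # Condition the encoder on the label so each class gets its own posterior.
    h = np.tanh(np.concatenate([x, one_hot(y)], axis=1) @ W_enc + b_enc)
    return h @ W_mu + b_mu, h @ W_lv + b_lv

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z, y):
    # The decoder is also label-conditioned, mapping latents back to features.
    h = np.tanh(np.concatenate([z, one_hot(y)], axis=1) @ W_dec1 + b_dec1)
    return h @ W_dec2 + b_dec2

def augment(y):
    # Label-wise augmentation: sample latents from the shared prior N(0, I)
    # and decode them into synthetic features for the requested labels.
    z = rng.standard_normal((len(y), LATENT_DIM))
    return decode(z, y)
```

Because every client decodes from the same prior, features generated for a given label follow one shared, label-wise distribution; this is the intuition behind using a CVAE to align non-IID clients, though the paper's actual training objective and aggregation scheme are not shown here.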

