Abstract

Federated Learning (FL) can train models in an edge environment without sending raw data; however, its performance is still constrained by data heterogeneity. To address the problems of data heterogeneity and resource scarcity on edge devices, we propose Federated Learning via Dynamic Aggregation (FedDA), which mitigates the influence of data heterogeneity and improves model accuracy. FedDA updates the impact of each local model on the global model in real time across different training stages, and it adjusts the number of local epochs in each round to prevent devices from dropping out while obtaining more accurate local models. Its core module is the model impact factor (MIF), which determines the aggregation weights and thereby avoids the improper extraction of local information that fixed weights impose on the aggregated model. We conducted several experiments to evaluate convergence speed against other algorithms on MNIST; FedDA consistently outperforms six state-of-the-art algorithms on the MNIST, CIFAR-10, and CIFAR-100 datasets. Under significant data heterogeneity, FedDA improves accuracy over the compared algorithms by up to 6%, and by at least about 3%, especially in resource-scarce environments. To reach a specified target accuracy, FedDA is 3 times faster than SCAFFOLD and at least 50% faster than the other algorithms.
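The abstract does not spell out the MIF formula, so the following is only a minimal sketch of the general shape of dynamic aggregation it describes: recomputing per-client aggregation weights every round from how each local model relates to the current global model, rather than fixing them once from client data sizes as in FedAvg. The `mif_scores` function and its inverse-distance heuristic are hypothetical placeholders, not the paper's actual MIF definition.

```python
import numpy as np

def mif_scores(global_model, local_models):
    # Hypothetical stand-in for the paper's model impact factor (MIF):
    # score each local model by the inverse of its L2 distance from the
    # current global model, so updates that diverge strongly contribute
    # less to the aggregate. The real MIF definition is in the paper.
    scores = []
    for local in local_models:
        dist = sum(np.linalg.norm(local[k] - global_model[k])
                   for k in global_model)
        scores.append(1.0 / (dist + 1e-8))
    return np.asarray(scores)

def aggregate_dynamic(global_model, local_models):
    # Dynamic aggregation: weights are recomputed every communication
    # round instead of being fixed for the whole training run.
    weights = mif_scores(global_model, local_models)
    weights /= weights.sum()  # normalize to a convex combination
    return {
        k: sum(w * local[k] for w, local in zip(weights, local_models))
        for k in global_model
    }

# One simulated round with three clients and a two-tensor model.
rng = np.random.default_rng(0)
global_model = {"w": rng.normal(size=(4, 4)), "b": rng.normal(size=4)}
local_models = [
    {k: v + 0.1 * rng.normal(size=v.shape) for k, v in global_model.items()}
    for _ in range(3)
]
global_model = aggregate_dynamic(global_model, local_models)
```

Under these assumptions, a client whose update stays close to the global model receives a larger weight; any per-round scoring rule could be substituted for the placeholder heuristic.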
