Abstract

Machine learning on individual small-scale unmanned aerial vehicles (UAVs) cannot accomplish the complex missions required of swarm intelligence. Federated learning (FL) addresses this by enabling large-scale collaborative learning while keeping each client's data private. However, the client data involved in FL is often non-independent and identically distributed (Non-IID), and the amount of data per client is unbalanced. This can seriously slow the model's convergence and may even prevent convergence altogether. To tackle these problems, we propose a new asynchronous FL scheme that incorporates adaptive client training. The scheme determines each client's aggregation weight from the staleness of its model parameters and the amount of data it holds, and dynamically adjusts each client's local training using the average training loss. Experiments on public datasets verify the effectiveness of our method: it alleviates the impact of Non-IID and unbalanced data in asynchronous FL and accelerates training.
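The aggregation idea described above can be sketched in a few lines. This is a hypothetical illustration only, not the paper's exact formula: the mixing coefficient `alpha`, the polynomial staleness decay, and the `base_lr` parameter are all assumptions chosen for clarity.

```python
import numpy as np

def staleness_weight(staleness: int, a: float = 0.5) -> float:
    """Polynomial decay: staler client updates receive smaller weight."""
    return (1.0 + staleness) ** (-a)

def aggregate(global_model: np.ndarray,
              client_model: np.ndarray,
              staleness: int,
              client_samples: int,
              total_samples: int,
              base_lr: float = 0.6) -> np.ndarray:
    """Mix one client's model into the global model asynchronously.

    The mixing coefficient shrinks with staleness and grows with the
    client's share of the total data (illustrative weighting; the
    scheme in the paper may combine these factors differently).
    """
    alpha = (base_lr
             * staleness_weight(staleness)
             * (client_samples / total_samples))
    return (1.0 - alpha) * global_model + alpha * client_model

# Toy usage: a fresh client holding half the data moves the global
# model further than a stale client holding little data.
g = np.zeros(3)
fresh = aggregate(g, np.ones(3), staleness=0, client_samples=50, total_samples=100)
stale = aggregate(g, np.ones(3), staleness=8, client_samples=10, total_samples=100)
```

Under these assumptions the fresh, data-rich update dominates, which is the qualitative behavior the abstract describes: staleness and data imbalance both modulate each client's influence on the global model.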
