Abstract
Machine learning on individual small-scale unmanned aerial vehicles (UAVs) cannot accomplish the complex missions required of swarm intelligence. Federated learning (FL) addresses this by enabling large-scale collaborative learning while keeping each client's data private. However, the client data involved in FL is often non-independent and identically distributed (Non-IID) and unbalanced in quantity, which can seriously slow the model's convergence or even prevent convergence altogether. To tackle these problems, we propose a new asynchronous FL scheme that adaptively adjusts client training. The scheme determines each client's aggregation weight from the staleness of its model parameters and the amount of data it holds, and dynamically adjusts each client's local training using its average training loss. Experiments on public datasets verify the effectiveness of our method: it alleviates the impact of Non-IID and unbalanced data in asynchronous FL and accelerates FL training.
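The staleness- and data-size-aware aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the exponential staleness decay, the `alpha` parameter, and all function names are assumptions made for the example.

```python
import math

def aggregation_weight(staleness, num_samples, total_samples, alpha=0.5):
    """Weight one client's update: decay with staleness, scale by data share.

    The exact weighting formula here (exponential decay times data fraction)
    is a hypothetical choice for illustration only.
    """
    staleness_factor = math.exp(-alpha * staleness)  # staler updates count less
    data_factor = num_samples / total_samples        # more data counts more
    return staleness_factor * data_factor

def aggregate(global_params, client_params, staleness, num_samples, total_samples):
    """Mix one (possibly stale) client update into the global model."""
    w = aggregation_weight(staleness, num_samples, total_samples)
    return [(1 - w) * g + w * c for g, c in zip(global_params, client_params)]

# A fresh update (staleness 0) moves the global model more than a stale one
# from a client holding the same amount of data.
g = [0.0, 0.0]
fresh = aggregate(g, [1.0, 1.0], staleness=0, num_samples=50, total_samples=100)
stale = aggregate(g, [1.0, 1.0], staleness=5, num_samples=50, total_samples=100)
```

Weighting each update by both recency and data volume is what lets an asynchronous server accept client updates as they arrive while limiting the damage from outdated or data-poor contributions.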