Abstract

Federated learning enables a large number of clients (such as edge computing devices) to jointly learn a model without sharing data. However, the heavy communication required by federated learning aggregation algorithms hinders the deployment of artificial intelligence at the last mile. Although FederatedAveraging (FedAvg) is the leading algorithm, its communication cost remains high. Communication delay (performing more local updates between aggregations) and gradient sparsification can each reduce communication cost, but no previous work has analysed how these two dimensions relate and act together. To address the problem that communication in federated learning is expensive and has become a training bottleneck, we improve the FedAvg algorithm and propose an adaptive communication frequency FederatedAveraging algorithm (AFedAvg). The gradient sparsification operation reduces the number of parameters sent in a single communication, while the communication delay operation allows training to converge faster and reach smaller losses. The number of parameters surviving sparsification is used to dynamically select the communication frequency of the next round. Experimental results show that AFedAvg outperforms FedAvg and its variants in terms of communication cost, achieving 2.4X–23.1X communication compression across different data distributions while requiring minimal communication rounds to converge.

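The abstract describes the mechanism only at a high level. The following is a minimal sketch of how the pieces could fit together, not the paper's actual implementation: the threshold sparsifier, the `choose_local_steps` mapping from update sparsity to local-step count, and all parameter names are assumptions introduced here for illustration.

```python
import numpy as np

def threshold_sparsify(update, tau):
    """Zero out entries of a flat update vector whose magnitude is below tau."""
    mask = np.abs(update) >= tau
    return update * mask, int(mask.sum())   # sparse update, number of transmitted values

def choose_local_steps(num_sent, dim, min_steps=1, max_steps=20):
    """Hypothetical adaptation rule: the fewer parameters clients had to send,
    the more local steps (longer communication delay) are allowed next round."""
    density = num_sent / dim
    return int(np.clip(round(max_steps * (1.0 - density)), min_steps, max_steps))

def afedavg(client_grad_fns, w0, rounds=50, tau=1e-3, lr=0.1):
    """One possible AFedAvg-style loop: local SGD on each client, sparsified
    updates, FedAvg aggregation, and an adaptive local-step count."""
    w = w0.copy()
    local_steps = 1
    for _ in range(rounds):
        updates, sent_counts = [], []
        for grad_fn in client_grad_fns:
            w_local = w.copy()
            for _ in range(local_steps):            # communication delay: several local steps
                w_local -= lr * grad_fn(w_local)
            sparse_update, sent = threshold_sparsify(w_local - w, tau)
            updates.append(sparse_update)
            sent_counts.append(sent)
        w = w + np.mean(updates, axis=0)             # FedAvg-style averaging of sparse updates
        local_steps = choose_local_steps(np.mean(sent_counts), w.size)
    return w
```

In this reading, rounds whose updates are dense (many entries cross the threshold) are followed by more frequent communication, while sparse rounds let clients train longer locally; this is one plausible interpretation of the abstract's rule that the number of sparse parameters selects the next round's communication frequency.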