Abstract
As an emerging machine learning paradigm, federated learning has received increasing attention in recent years. It enables decentralized model training across data silos or intelligent edge devices in the Internet of Things without exchanging local raw data. Various algorithms have been proposed to address the challenges in federated learning. However, most of these methods are based on stochastic gradient descent, which suffers from slow convergence and unstable performance during training. In this paper, we propose a differential adaptive federated optimization method that incorporates an adaptive learning rate and the gradient difference into the iteration rule of the global model. We further adopt first-order moment estimation to compute an approximation of the differential term, so as to avoid amplifying random noise from the input data samples. A theoretical convergence guarantee is established for the proposed method in a stochastic non-convex setting, under both full and partial client participation. Experiments on an image classification task are performed on two standard datasets by training a neural network model, and comparisons against several baselines demonstrate the effectiveness of the proposed method.
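The abstract only outlines the global iteration rule, so the following is a minimal sketch of how a server-side update of this kind might look, assuming a FedOpt-style setup in which clients return local model deltas that the server aggregates. The function names, hyperparameters (`beta1`, `beta2`, `eta`, `eps`), and the exact way the gradient-difference term enters the rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def aggregate(client_updates):
    """Average the pseudo-gradients (local model deltas) reported by the
    participating clients, as in standard FedAvg-style aggregation."""
    return np.mean(client_updates, axis=0)

def server_step(w, delta, state, beta1=0.9, beta2=0.99, eta=0.1, eps=1e-3):
    """One hypothetical global update combining an adaptive learning rate with
    a gradient-difference (differential) term.  The differential term is taken
    from a first-order moment estimate of the aggregated update rather than the
    raw update, which damps the sampling noise from client data.
    Hyperparameter names and defaults here are illustrative assumptions."""
    m_prev, v_prev = state
    # First-order moment estimate of the aggregated client update.
    m = beta1 * m_prev + (1.0 - beta1) * delta
    # Differential term: change of the smoothed update between rounds.
    diff = m - m_prev
    # Second-moment accumulator giving the per-coordinate adaptive step size.
    v = beta2 * v_prev + (1.0 - beta2) * m**2
    # Global iteration rule: adaptive step on the smoothed update plus the
    # gradient-difference correction.
    w_new = w + eta * (m + diff) / (np.sqrt(v) + eps)
    return w_new, (m, v)
```

In practice the server would call `aggregate` on the deltas returned by the sampled clients each round and then apply `server_step` to the current global weights, carrying `(m, v)` as persistent server state across rounds.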