Abstract

Recently, a novel machine learning technique, federated learning, has attracted ever-increasing interest from both academia and industry. The main idea of federated learning is to collaboratively train a globally optimal machine learning model among all participants. During parameter updating, the communication cost of the system or network can become extremely high when the number of iterations and participants is large. Although the edge computing paradigm can decrease latency to a certain extent, achieving further delay reduction remains a challenge. To address this problem, we first model it as a finite-sum optimization problem. We then propose a federated stochastic variance reduced gradient based method that decreases, from a system perspective, the number of communication rounds between the participants and the server while preserving accuracy, and we provide the corresponding convergence analysis. Finally, we evaluate the proposed method on linear regression and logistic regression problems. The simulation results show that our method reduces the communication cost significantly compared with standard stochastic gradient descent based federated learning.
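To make the idea concrete, below is a minimal sketch of one communication round of a variance-reduced federated update on a linear regression objective, one of the abstract's test problems. The function names, the quadratic loss, the client partitioning, and the simple model averaging are illustrative assumptions, not the paper's exact algorithm: the server broadcasts a snapshot of the weights together with the full gradient at that snapshot, each client runs several locally corrected stochastic steps, and the server averages the resulting local models.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean squared loss (1/2n)||Xw - y||^2 on one batch."""
    return X.T @ (X @ w - y) / len(y)

def fed_svrg_round(w_snapshot, clients, lr=0.1, local_steps=20, rng=None):
    """One hypothetical communication round of federated SVRG (a sketch,
    not the paper's exact protocol).

    Server: broadcast the snapshot weights and the full gradient at the
    snapshot. Clients: run variance-reduced local SGD steps. Server:
    aggregate the local models by simple averaging.
    """
    rng = rng or np.random.default_rng(0)
    # Server side: full gradient at the snapshot, averaged over clients.
    full_grad = np.mean([grad(w_snapshot, X, y) for X, y in clients], axis=0)

    local_models = []
    for X, y in clients:
        w = w_snapshot.copy()
        for _ in range(local_steps):
            i = rng.integers(len(y))            # sample one local example
            xi, yi = X[i:i + 1], y[i:i + 1]
            # SVRG update: stochastic gradient corrected by the snapshot
            # gradient, which reduces variance without extra communication.
            g = grad(w, xi, yi) - grad(w_snapshot, xi, yi) + full_grad
            w -= lr * g
        local_models.append(w)
    # Server side: aggregate the client models.
    return np.mean(local_models, axis=0)

# Toy usage: three clients whose data share one underlying linear model.
rng = np.random.default_rng(42)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = fed_svrg_round(w, clients, lr=0.1, rng=rng)
print(w)  # converges toward w_true
```

Because each local step uses the snapshot correction, clients can take many inexpensive steps per round, which is how a variance-reduced scheme can trade local computation for fewer server round trips.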
