Abstract

Federated learning (FL) is a distributed machine learning paradigm that addresses data sharing and privacy concerns in today's academic community. As more FL edge devices participate in collaborative computing, improving communication efficiency while preserving model accuracy has become a pressing challenge. However, existing FL frameworks struggle to make full use of each edge device's contribution to the global model during aggregation, because the computing capacities and communication environments of edge devices vary widely. In this paper, we propose an efficient downsampling strategy, based on a proposed matching factor, for selecting FL edge devices, which achieves better global model accuracy with fewer selected edges. Our scheme exploits two observations: the model converges over training, and the data within the same data source exhibit implicit autocorrelation. We evaluate the matching-factor-based downsampling scheme on the popular CIFAR-10 dataset. Experimental results show that the scheme improves prediction accuracy, significantly reduces communication and computing costs, and improves the utilization of edge devices.
