Abstract

Advances in both computation and communication technologies facilitate the exploitation of the massive data generated by mobile devices. It is attractive to leverage these data and computation resources to train high-performance machine learning (ML) models. In traditional ML methods, all data are uploaded to servers for model training, which incurs large communication overhead and raises privacy concerns. Federated learning (FL) was proposed to address these issues. In this paper, we propose FedPCC, an efficient approach for FL in wireless networks based on the parallelism of communication and computation among devices. FedPCC accounts for the differences in communication and computation capabilities across devices and optimizes the training schedule for the devices selected in each round. Instead of dividing an FL training round into separate communication and computation steps, the FedPCC protocol has devices download the global model sequentially, with each device starting local training immediately after its download finishes, which allows better utilization of communication and computation resources. Specifically, we formulate an optimization problem to minimize the time of each training round and propose an algorithm to solve it based on the trade-off between devices' communication and computation capabilities. FedPCC uses a heuristic algorithm that lets slow devices start relatively earlier, shortening each training round and thus the overall training time. We conduct extensive experiments demonstrating that the proposed protocol outperforms existing protocols under different system settings.
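To make the scheduling idea concrete, the following is a minimal sketch, not the paper's actual algorithm, of a single-round schedule under simplified assumptions: each selected device has an estimated download time and local computation time, downloads share one sequential channel, and the upload/aggregation phase is ignored. The rule of serving devices with longer local computation first is one illustrative way to let slow devices start earlier; the `Device` and `schedule_round` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    download_time: float  # estimated time to download the global model
    compute_time: float   # estimated time for local training

def schedule_round(devices):
    """Illustrative rule: downloads share one sequential channel, so devices
    with longer local computation are served first, letting compute-bound
    (slow) devices start training earlier and overlap with later downloads."""
    order = sorted(devices, key=lambda d: d.compute_time, reverse=True)
    t, finish_times = 0.0, {}
    for dev in order:
        t += dev.download_time                        # sequential download on the shared channel
        finish_times[dev.name] = t + dev.compute_time  # local training overlaps later downloads
    round_time = max(finish_times.values())           # round ends when the slowest device finishes
    return order, round_time

# Example with hypothetical devices
if __name__ == "__main__":
    devices = [
        Device("fast_phone", download_time=1.0, compute_time=2.0),
        Device("slow_phone", download_time=2.0, compute_time=6.0),
        Device("tablet",     download_time=1.5, compute_time=3.0),
    ]
    order, round_time = schedule_round(devices)
    print([d.name for d in order], round_time)
```

In this toy example, serving the compute-bound device first reduces the round time compared with a schedule that downloads to the fastest device first, which is the intuition behind letting slow devices start earlier.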
