Abstract
Federated learning (FL) is an emerging privacy-preserving technique for machine learning that enables end devices to cooperatively train a global model without uploading their sensitive local data. Because of limited network bandwidth and considerable communication overhead, communication efficiency has become an essential bottleneck for FL. Existing solutions attempt to improve this situation by reducing the number of communication rounds, but usually at the cost of additional computation or degraded model accuracy. In this paper, we propose parameter Prediction-Based Federated Learning (PBFL), which comprises an extended Kalman filter-based prediction algorithm, a practical mechanism for setting the prediction error threshold, and an effective global model updating strategy. Instead of collecting all updates from participants, PBFL aggregates the model using predicted values, which substantially reduces the required communication rounds while preserving model accuracy. Each participant checks whether its predicted update falls outside the tolerance threshold and uploads its local update only when the prediction is inaccurate; in this way, no additional local computational resources are required. Experimental results on both multilayer perceptrons and convolutional neural networks show that PBFL outperforms state-of-the-art methods, improving communication efficiency by more than 66% with 1% higher model accuracy.
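For intuition, the upload-skipping rule described above can be sketched as follows. This is a minimal Python sketch under our own assumptions: the function names, the last-value predictor standing in for the paper's extended Kalman filter, and the relative-error threshold are all illustrative, not taken from the paper.

```python
import numpy as np

def predict_update(history):
    # Stand-in predictor: reuse the previous update. PBFL uses an
    # extended Kalman filter here; this placeholder only keeps the
    # sketch self-contained.
    return history[-1]

def client_step(local_update, history, tol):
    # Upload the real update only when the shared prediction misses it
    # by more than the tolerance threshold; otherwise send nothing.
    predicted = predict_update(history)
    rel_err = np.linalg.norm(local_update - predicted) / (np.linalg.norm(predicted) + 1e-12)
    return (local_update, False) if rel_err > tol else (None, True)

def server_aggregate(payloads, histories):
    # The server runs the same predictor, so it can substitute its own
    # prediction for every client that skipped uploading (FedAvg-style mean).
    stand_ins = [p if p is not None else predict_update(h)
                 for p, h in zip(payloads, histories)]
    return np.mean(stand_ins, axis=0)

# Tiny usage example with two simulated clients whose new updates are
# close to the predictions, so both skip the upload.
rng = np.random.default_rng(0)
histories = [[rng.normal(size=4)] for _ in range(2)]
updates = [h[0] + rng.normal(scale=0.01, size=4) for h in histories]
payloads = [client_step(u, h, tol=0.1)[0] for u, h in zip(updates, histories)]
print(server_aggregate(payloads, histories))
```

Because the server and each client run the same predictor on the same update history, a skipped upload costs no extra coordination: the server already knows the value the client would have confirmed.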