Abstract
Federated learning (FL) is a collaborative, privacy-preserving method for training deep neural networks at the edge of the Internet of Things (IoT). Despite its many advantages, existing FL implementations suffer from high communication costs that prevent adoption at scale. Specifically, the frequent model updates exchanged between the central server and the many end nodes are a source of channel congestion and high energy consumption. This letter tackles this aspect by introducing federated learning with gradual layer freezing (FedGLF), a novel FL scheme that gradually reduces the portion of the model sent back and forth, relieving the communication burden while preserving the quality of the training service. The results collected on two image classification tasks learned with different data distributions show that FedGLF outperforms conventional FL schemes, with data volume savings ranging from 14% to 59% or up to 2.5% higher accuracy.
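To make the communication-saving intuition concrete, the following is a minimal sketch, not the FedGLF algorithm itself: it assumes a hypothetical four-layer model and an illustrative schedule that freezes one more front layer every few rounds, then compares the total parameters exchanged against conventional FL, which ships the full model every round.

```python
# Illustrative sketch of gradual layer freezing in federated learning.
# Layer sizes and the freezing schedule below are assumptions for the
# example, not values from the letter.

layer_sizes = [1000, 2000, 3000, 4000]  # parameters per layer, input to output


def communicated_params(num_frozen: int) -> int:
    """Parameters exchanged per round when the first `num_frozen` layers are frozen."""
    return sum(layer_sizes[num_frozen:])


def total_traffic(rounds: int, freeze_every: int) -> int:
    """Total parameters exchanged over `rounds`, freezing one more layer
    every `freeze_every` rounds (at most all but the last layer)."""
    total = 0
    for r in range(rounds):
        frozen = min(r // freeze_every, len(layer_sizes) - 1)
        total += communicated_params(frozen)
    return total


baseline = sum(layer_sizes) * 20                  # conventional FL: full model, 20 rounds
gradual = total_traffic(rounds=20, freeze_every=5)
savings = 1 - gradual / baseline                  # fraction of traffic avoided
```

Under these toy numbers the gradual schedule exchanges 150,000 parameters versus a 200,000-parameter baseline, a 25% saving, which falls inside the 14% to 59% range the letter reports for its real workloads.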