Abstract
Despite the many advantages of deep neural networks over shallow networks in various machine learning tasks, their effectiveness is compromised in a federated learning setting by their large storage sizes and the high computational resources required for training. A large model can require an infeasible amount of data to be transmitted between the server and the clients during training. To address these issues, we investigate traditional and novel compression techniques for constructing sparse models from dense networks, whose storage and bandwidth requirements are significantly lower. We do this by separately considering compression of the server model, to reduce downstream communication, and compression of the client models, to reduce upstream communication. Both play a crucial role in developing and maintaining sparsity across communication cycles. We empirically demonstrate the efficacy of the proposed schemes on standard datasets and verify that they outperform various state-of-the-art baseline schemes in terms of accuracy and communication volume.
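The abstract does not spell out the specific compression scheme, but the core idea of reducing upstream and downstream traffic by sending sparse tensors can be illustrated with a generic top-k sparsification sketch. The function names (`top_k_sparsify`, `densify`) and the `keep_ratio` parameter below are hypothetical and only show how a client or server might transmit the largest-magnitude entries of a model (or model update) instead of the full dense tensor; they are not the paper's method.

```python
import numpy as np

def top_k_sparsify(tensor, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of a parameter tensor.

    Returns the retained values, their flat indices, and the original shape,
    which is all that needs to be transmitted instead of the dense tensor.
    """
    flat = tensor.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Indices of the k largest-magnitude entries (unordered, which is fine here).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return flat[idx], idx, tensor.shape

def densify(values, idx, shape):
    """Reconstruct the dense tensor on the receiving side."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)

# Example: a client compresses its model update before uploading it.
update = np.random.randn(1000, 100).astype(np.float32)
values, idx, shape = top_k_sparsify(update, keep_ratio=0.05)
recovered = densify(values, idx, shape)
print(f"transmitted {values.size + idx.size} numbers instead of {update.size}")
```

With a 5% keep ratio, the transmitted payload (values plus indices) is roughly a tenth of the dense tensor, which is the kind of communication saving the abstract targets; the same mechanism can be applied symmetrically to the server model for the downstream direction.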