Abstract
In recent years, wireless <i>federated learning</i> (FL) has been proposed to support mobile intelligent applications over wireless networks, protecting data privacy and security by exchanging model parameters between mobile devices and the <i>base station</i> (BS). However, the learning latency grows with the neural network scale due to limited local computing power and communication bandwidth. To tackle this issue, we introduce model pruning for wireless FL to reduce the neural network scale. Device selection is also considered to further improve the learning performance: by removing stragglers with low computing power or poor channel conditions, the model aggregation loss caused by pruning can be alleviated and the communication overhead effectively reduced. We analyze the convergence rate and learning latency of the proposed model pruning method and formulate an optimization problem that maximizes the convergence rate under a given learning latency budget by jointly optimizing the pruning ratio, device selection, and wireless resource allocation. By solving this problem, we derive closed-form solutions for the pruning ratio and wireless resource allocation, and develop a threshold-based device selection strategy. Finally, extensive experiments demonstrate that the proposed model pruning algorithm outperforms existing schemes.
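The abstract does not specify the pruning criterion used; as a minimal sketch of the general idea, the following assumes magnitude-based unstructured pruning, where a given pruning ratio zeroes out that fraction of the smallest-magnitude weights before the device uploads its update. The function name and NumPy-based formulation are illustrative, not taken from the paper.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out the `ratio` fraction of entries with smallest magnitude.

    Ties at the threshold magnitude are also pruned, so slightly more
    than `ratio * size` entries may be removed in degenerate cases.
    """
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)          # number of entries to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half of a 2x2 layer's weights
w = np.array([[0.1, -2.0], [0.5, 3.0]])
pruned = prune_by_magnitude(w, 0.5)     # keeps only -2.0 and 3.0
```

A larger pruning ratio reduces both local computation and the number of parameters uploaded to the BS, at the cost of aggregation error, which is the trade-off the paper's convergence analysis quantifies.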