Abstract
This article investigates scheduling for the federated multitask learning (FMTL) problem with a hard-cooperation structure over wireless networks, where scheduling is particularly challenging because different tasks exhibit different convergence behaviors. Exploiting this model structure, we propose a dynamic user and task scheduling scheme with a block-wise incremental gradient aggregation algorithm, in which the neural network model is decomposed into a common feature-extraction module and <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$M$ </tex-math></inline-formula> task-specific modules, so that block gradients with respect to different modules can be scheduled separately. We further propose a Lyapunov-drift-based scheduling scheme that minimizes the overall communication latency by exploiting both the instantaneous data importance and the channel state information. We prove that the proposed scheme converges almost surely to a KKT solution of the training problem, thereby resolving the data-distortion issue. Simulation results show that the proposed scheme significantly reduces communication latency compared with state-of-the-art baseline schemes.