Abstract
This study addresses a class of distributed optimization problems with nonidentical local constraint sets over time-varying unbalanced networks. Since each agent has access only to local gradient information corrupted by random errors, this paper proposes a distributed stochastic gradient projection algorithm. Under certain conditions on the random errors and a uniformly jointly strongly connected topology, it is shown that the local decision states of all agents converge to a common optimal solution with probability one, at a sublinear rate of O(ln k/k). Notably, when at least one local objective function is strongly convex, the algorithm achieves a faster sublinear rate of O(1/k). These theoretical results are validated through numerical simulations. Furthermore, the proposed algorithm is applied to estimate the unknown parameters in distributed linear regression problems with incomplete data. Numerical results indicate that the algorithm matches the convergence properties reported in the existing literature while converging noticeably faster. These findings provide a theoretical foundation for further applications and extensions of the algorithm and demonstrate its potential for solving practical problems.
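For intuition, the sketch below illustrates one plausible form of a distributed stochastic gradient projection update on a toy distributed linear-regression problem; it is not the paper's exact algorithm. The mixing weights, noise model, step size, and box constraint are all illustrative assumptions, and the plain row-stochastic averaging shown here would typically require push-sum-style corrections to handle unbalanced directed graphs exactly.

```python
import numpy as np

# Minimal sketch (not the paper's exact method): agents mix their states with
# time-varying row-stochastic weights, then take a projected step along a
# noisy local gradient of f_i(x) = 0.5*||A_i x - b_i||^2. Corrections for
# unbalanced digraphs (e.g., push-sum reweighting) are omitted here.

rng = np.random.default_rng(0)
n, dim, T = 5, 3, 3000                     # agents, dimension, iterations
A = [rng.normal(size=(10, dim)) for _ in range(n)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true + 0.1 * rng.normal(size=10) for Ai in A]
lo, hi = -5.0, 5.0                         # illustrative box constraint X_i

def project(x):
    # Euclidean projection onto the box [lo, hi]^dim.
    return np.clip(x, lo, hi)

def noisy_grad(i, x):
    # Local gradient corrupted by a zero-mean random error.
    return A[i].T @ (A[i] @ x - b[i]) + 0.05 * rng.normal(size=dim)

x = [rng.normal(size=dim) for _ in range(n)]
for k in range(T):
    alpha = 1.0 / (k + 1)                  # diminishing step size
    # Time-varying mixing: a shifted ring, jointly strongly connected over time.
    offset = 1 + (k % (n - 1))
    W = 0.5 * np.eye(n)
    for i in range(n):
        W[i, (i + offset) % n] = 0.5       # each row sums to one
    mixed = [sum(W[i, j] * x[j] for j in range(n)) for i in range(n)]
    x = [project(mixed[i] - alpha * noisy_grad(i, mixed[i])) for i in range(n)]

print("max disagreement:", max(np.linalg.norm(xi - x[0]) for xi in x))
print("error vs x_true: ", np.linalg.norm(x[0] - x_true))
```

Running this sketch, the agents' states reach near-consensus and approach the regression parameter x_true, consistent with the almost-sure convergence behavior the abstract describes.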