Abstract
In this article, the problem of online distributed optimization with a set constraint is solved by employing a network of agents. Each agent has access only to a local objective function and a local set constraint, and can communicate only with its neighbors via a digraph, which is not necessarily balanced. Moreover, agents have no prior knowledge of their future objective functions. In contrast to existing works on online distributed optimization, we consider the scenario in which the objective functions at each time step are nonconvex. To handle this challenge, we propose an online distributed algorithm based on the consensus algorithm and the mirror descent algorithm. Of particular interest is that regrets involving the first-order optimality condition are used to measure the performance of the proposed algorithm. Under mild assumptions on the communication graph and the objective functions, we prove that the regrets grow sublinearly. Finally, a simulation example is presented to demonstrate the effectiveness of our theoretical results.
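To make the consensus-plus-mirror-descent template concrete, the following is a minimal sketch of one synchronous round, assuming a Euclidean Bregman divergence (so the mirror step reduces to a projected gradient step), a ball-shaped set constraint, and a row-stochastic mixing matrix. The function names, the projection, and the step-size choice are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
# Hypothetical sketch of one round of online distributed mirror descent
# with a consensus (weighted-averaging) step over a digraph.
# The Bregman divergence is taken to be Euclidean, so the mirror step
# reduces to a projected gradient step; the projection, step size, and
# mixing weights below are assumptions for illustration only.
import numpy as np

def project_to_ball(x, radius=1.0):
    # Euclidean projection onto a ball -- stands in for the local set constraint.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def distributed_mirror_descent_step(states, weights, grads, eta):
    """One synchronous round for all agents.

    states  : (n_agents, dim) current decisions x_i(t)
    weights : (n_agents, n_agents) row-stochastic mixing matrix of the digraph
    grads   : (n_agents, dim) gradients of the local (possibly nonconvex)
              losses f_{i,t}, revealed only after decisions are committed
    eta     : step size, e.g. of order 1/sqrt(T) for sublinear regret
    """
    # Consensus step: each agent averages the states of its in-neighbors.
    mixed = weights @ states
    # Mirror (here: projected gradient) step onto the constraint set.
    return np.array([
        project_to_ball(mixed[i] - eta * grads[i])
        for i in range(states.shape[0])
    ])
```

In this sketch each agent first mixes neighbors' decisions and then takes a local descent step; replacing the Euclidean projection with a general Bregman proximal step would recover the mirror-descent form referenced in the abstract.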