Abstract

Consider a set of agents collaboratively solving a distributed convex optimization problem asynchronously under stringent communication constraints: when an agent becomes active, it may communicate with only one of its neighbors. We propose new state-dependent gossip algorithms in which the agents with maximal dissent average their estimates. We prove the almost sure convergence of max-dissent subgradient methods using a unified framework applicable to other state-dependent distributed optimization algorithms. Furthermore, our proof technique bypasses the need to establish information flow between any two agents within a time interval of uniform length, by instead studying the convergence properties of the Lyapunov function used in our analysis.
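The scheme described above can be illustrated with a minimal sketch. The local costs (absolute-value functions with hypothetical targets `t_i`), the path-graph topology, and the diminishing step size are all illustrative assumptions, not details from the paper; the sketch only shows the max-dissent rule: an awakened agent gossips with the neighbor whose estimate differs most from its own, then both take a subgradient step.

```python
import numpy as np

# Hedged sketch of a max-dissent gossip subgradient method. Assumptions
# (not from the paper): each agent i has local cost f_i(x) = |x - t_i|,
# whose subgradient is sign(x - t_i); agents sit on a path graph; the
# step size 1/k is diminishing.
rng = np.random.default_rng(0)

targets = np.array([-2.0, 0.0, 1.0, 4.0])           # hypothetical local minimizers t_i
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph 0-1-2-3
x = rng.normal(size=4)                              # agents' current estimates

def subgrad(i, xi):
    # A subgradient of the illustrative local cost |x - t_i|.
    return np.sign(xi - targets[i])

for k in range(1, 5001):
    i = int(rng.integers(4))                        # an agent becomes active
    # Max-dissent rule: pick the neighbor whose estimate differs most.
    j = max(neighbors[i], key=lambda m: abs(x[m] - x[i]))
    avg = 0.5 * (x[i] + x[j])                       # gossip averaging step
    step = 1.0 / k                                  # diminishing step size
    x[i] = avg - step * subgrad(i, avg)             # local subgradient steps
    x[j] = avg - step * subgrad(j, avg)

print(x)  # estimates should cluster near a minimizer of sum_i |x - t_i|
```

Under these assumptions, the estimates drift toward consensus near a median of the targets, since the sum of absolute-value costs is minimized there; the state-dependent neighbor choice speeds up the averaging relative to uniformly random gossip.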
