Abstract

In this paper, a distributed smoothing accelerated projection algorithm (DSAPA) is proposed to solve constrained non-smooth convex optimization problems over undirected multi-agent networks in a distributed manner, where the objective function is not required to have a Lipschitz-continuous gradient or to be strongly convex. First, based on a distributed exact penalty method, the original optimization problem is transformed into a standard problem without consensus constraints. Then, a novel DSAPA is proposed by combining a smoothing approximation with Nesterov's acceleration scheme. In addition, we provide a systematic analysis that derives an upper bound on the convergence rate in terms of the penalized objective function and selects the optimal step size accordingly. Our results demonstrate that the proposed DSAPA achieves a convergence rate of $O(\frac{\log (k)}{k})$ when the optimal step size is chosen. Finally, the effectiveness and correctness of the proposed algorithm are verified through numerical examples and a practical application.
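To make the algorithmic ingredients concrete, the sketch below illustrates how a smoothing approximation can be combined with a Nesterov-type accelerated projected gradient step on a single agent. The Huber-type smoothing of an $\ell_1$ term, the box constraint set, the problem data `A` and `b`, and all function names are illustrative assumptions rather than details from the paper; the actual DSAPA additionally incorporates the distributed exact penalty terms that couple neighboring agents.

```python
import numpy as np

def smoothed_abs_grad(x, mu):
    """Gradient of a Huber-type smoothing of |x| with parameter mu > 0."""
    return np.clip(x / mu, -1.0, 1.0)

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi] (example constraint set)."""
    return np.clip(x, lo, hi)

def accelerated_smoothing_projection(A, b, lam, x0, lo, hi, mu=1e-2, iters=500):
    """
    Nesterov-accelerated projected gradient on the smoothed objective
        f_mu(x) = 0.5 * ||A x - b||^2 + lam * sum_i huber_mu(x_i),
    subject to lo <= x <= hi. Single-agent illustration only.
    """
    # Lipschitz constant of the smoothed gradient: ||A||_2^2 + lam / mu.
    L = np.linalg.norm(A, 2) ** 2 + lam / mu
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b) + lam * smoothed_abs_grad(y, mu)
        x_new = project_box(y - grad / L, lo, hi)      # projected gradient step
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2      # Nesterov momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)    # extrapolation step
        x, t = x_new, t_new
    return x

# Example usage with random synthetic data (purely illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x_hat = accelerated_smoothing_projection(A, b, lam=0.1, x0=np.zeros(10),
                                         lo=-1.0, hi=1.0)
```

In this kind of scheme, the smoothing parameter `mu` trades off approximation accuracy against the Lipschitz constant of the smoothed gradient, which is what makes the acceleration applicable to a non-smooth objective in the first place.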
