Abstract

This article considers constrained nonsmooth generalized convex and strongly convex optimization problems. For such problems, two novel distributed smoothing projection neurodynamic approaches (DSPNAs) are proposed to seek optimal solutions with faster convergence rates in a distributed manner. First, we equivalently transform the original constrained optimization problem into a standard smooth distributed problem with only local set constraints, based on an exact penalty method and smoothing approximation. Then, to handle nonsmooth generalized convex optimization, we propose a novel DSPNA based on a continuous variant of Nesterov's acceleration (called DSPNA-N), which attains a faster convergence rate of $\mathcal{O}(1/t^{2})$, and we design a second DSPNA inspired by the continuous variant of Polyak's heavy-ball method (called DSPNA-P) to address the nonsmooth strongly convex optimization problem with an explicit exponential convergence rate. In addition, the existence, uniqueness, and feasibility of the solutions of the proposed DSPNAs are established. Finally, numerical results demonstrate the effectiveness of the DSPNAs.
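For context, the two stated rates match the canonical continuous-time dynamics from which the designs take their names. A minimal sketch follows (illustrative background only, not the authors' exact DSPNA dynamics, which additionally involve projection onto local constraint sets, smoothing, and distributed consensus coupling). The continuous variant of Nesterov's acceleration for a smooth convex objective $f$ is the ODE

\[
\ddot{x}(t) + \frac{3}{t}\,\dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0,
\]

which guarantees $f(x(t)) - f(x^{\ast}) = \mathcal{O}(1/t^{2})$, while the continuous heavy-ball dynamics for a $\mu$-strongly convex $f$,

\[
\ddot{x}(t) + \alpha\,\dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0, \qquad \alpha = 2\sqrt{\mu},
\]

yield exponential decay of $f(x(t)) - f(x^{\ast})$.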
