Abstract

This article considers constrained nonsmooth general convex and strongly convex optimization problems. For such problems, two novel distributed smoothing projection neurodynamic approaches (DSPNAs) are proposed to seek their optimal solutions with faster convergence rates in a distributed manner. First, we equivalently transform the original constrained optimization problem into a standard smooth distributed problem with only local set constraints, based on an exact penalty and smoothing approximation methods. Then, to deal with nonsmooth general convex optimization, we propose a novel DSPNA based on a continuous variant of Nesterov's acceleration (called DSPNA-N), which has a faster convergence rate of $\mathcal{O}(1/t^{2})$, and we design a novel DSPNA inspired by the continuous variant of Polyak's heavy-ball method (called DSPNA-P) to address the nonsmooth strongly convex optimization problem with an explicit exponential convergence rate. In addition, the existence, uniqueness, and feasibility of the solutions of the proposed DSPNAs are established. Finally, numerical results demonstrate the effectiveness of the DSPNAs.
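For orientation, the following is a minimal sketch (not taken from the paper) of the well-known centralized, unconstrained continuous-time prototypes that such accelerated neurodynamic designs build on; the paper's actual DSPNAs additionally involve smoothing, projection onto local constraint sets, and distributed consensus terms. The continuous variant of Nesterov's acceleration is commonly modeled by the Su-Boyd-Candès ODE
$$\ddot{x}(t) + \frac{3}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0,$$
whose trajectories satisfy $f(x(t)) - f(x^{*}) = \mathcal{O}(1/t^{2})$ for smooth convex $f$, matching the rate claimed for DSPNA-N. The continuous variant of Polyak's heavy-ball method uses constant rather than vanishing damping,
$$\ddot{x}(t) + a\,\dot{x}(t) + \nabla f(x(t)) = 0, \qquad a > 0,$$
which converges exponentially (at a rate depending on $a$ and the strong convexity modulus) when $f$ is strongly convex, mirroring the exponential rate claimed for DSPNA-P.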
