Abstract

This article proposes a single-layer projection neural network, based on a penalty function and a differential inclusion, for solving nonsmooth pseudoconvex optimization problems with linear equality and convex inequality constraints; bound constraints among the inequalities, such as box and sphere types, are handled by a projection operator. By introducing a Tikhonov-like regularization method, the proposed network no longer needs to compute exact penalty parameters. Under mild assumptions, nonsmooth analysis shows that the state solution of the network is bounded and exists globally, enters the constrained feasible region in finite time, and never escapes from this region again; the state solution then converges to an optimal solution of the considered optimization problem. Compared with some existing subgradient-based neural networks, the proposed algorithm eliminates the dependence on the selection of the initial point, and the model has a simple structure and low computational load. Three numerical experiments and two application examples illustrate the global convergence and effectiveness of the proposed neural network.
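To make the ingredients above concrete, the following is a minimal toy sketch, not the paper's actual model: it forward-Euler-integrates a projected dynamical system for a smooth quadratic objective under a box constraint, with a vanishing term standing in for the Tikhonov-like regularization. All names, the objective, and the step sizes are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy problem (not from the paper): minimize
# f(x) = ||x - c||^2 subject to the box constraint lo <= x <= hi,
# with the bound constraint handled by a projection operator.

def project_box(x, lo, hi):
    """Projection onto the box [lo, hi] -- the kind of bound
    constraint the abstract says is processed by a projection."""
    return np.clip(x, lo, hi)

def grad_f(x, c):
    """Gradient of the illustrative quadratic objective."""
    return 2.0 * (x - c)

def run(x0, c, lo, hi, steps=2000, h=1e-2, eps0=1.0):
    """Forward-Euler discretization of the projected dynamics
    dx/dt = -x + P(x - grad_f(x) - eps(t) * x), where eps(t) -> 0
    plays the role of a Tikhonov-like regularization term."""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        eps = eps0 / (1.0 + k)  # vanishing regularization
        x = x + h * (-x + project_box(x - grad_f(x, c) - eps * x, lo, hi))
    return x
```

For example, `run([5.0, -5.0], np.array([0.5, 2.0]), 0.0, 1.0)` drives the state into the box and toward the constrained minimizer `[0.5, 1.0]`, mirroring the abstract's claim that the state enters the feasible region in finite time and then converges.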
