Abstract

Generalized gradient projection neural network models are proposed to solve nonsmooth convex and nonconvex nonlinear programming problems over a closed convex subset of ℝⁿ. By using Clarke's generalized gradient, a neural network modeled by a differential inclusion is developed, and its dynamical behavior and optimization capabilities for both convex and nonconvex problems are rigorously analyzed in the framework of nonsmooth analysis and differential inclusion theory. First, for nonconvex optimization problems, quasiconvergence results similar to those obtained in previous work are proved. For convex optimization problems, global convergence results are proved, i.e., any trajectory of the neural network converges to an asymptotically stable equilibrium point, which is an optimal solution of the primal problem whenever one exists. This result remains valid even if the initial condition is chosen outside the feasible set. In addition, an asymptotic control result involving a Tykhonov-like regularization shows that any trajectory of the revised neural network can be forced to converge toward a particular optimal solution of the primal problem. Finally, two simulation algorithms are designed for solving the optimization and control problems, respectively, and three typical simulation experiments illustrate the accuracy and efficiency of the theoretical convergence results of this paper.
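To make the idea concrete, the following is a minimal sketch (not the authors' code) of how a gradient projection network of the kind described above can be simulated: the differential inclusion dx/dt ∈ P_Ω(x − α ∂f(x)) − x is discretized with an explicit Euler scheme, using one Clarke subgradient per step. The illustrative problem, step sizes, and constraint set below are assumptions chosen only for demonstration.

```python
import numpy as np

# Illustrative nonsmooth convex problem (assumed, not from the paper):
#   minimize f(x) = |x1 - 1| + x2**2   subject to x in the box Omega = [-2, 2]^2.

def subgradient(x):
    """Return one Clarke subgradient of f(x) = |x1 - 1| + x2**2."""
    g1 = np.sign(x[0] - 1.0)            # any value in [-1, 1] is valid at the kink x1 = 1
    return np.array([g1, 2.0 * x[1]])

def project_box(x, lo=-2.0, hi=2.0):
    """Projection P_Omega onto the box constraint set Omega."""
    return np.clip(x, lo, hi)

def simulate(x0, alpha=0.5, h=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = P_Omega(x - alpha*g(x)) - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        v = project_box(x - alpha * subgradient(x)) - x   # right-hand side of the dynamics
        x = x + h * v
    return x

if __name__ == "__main__":
    # The initial condition may lie outside the feasible set,
    # mirroring the convergence result stated in the abstract.
    print(simulate([5.0, -4.0]))        # the trajectory should approach the minimizer (1, 0)
```

This sketch only illustrates the projected-subgradient dynamics; the paper's own simulation algorithms and the Tykhonov-like regularized variant are not reproduced here.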
