Abstract
Generalized gradient projection neural network models are proposed to solve nonsmooth convex and nonconvex nonlinear programming problems over a closed convex subset of ℝⁿ. By using Clarke's generalized gradient, a neural network modeled by a differential inclusion is developed, and its dynamical behavior and optimization capabilities for both convex and nonconvex problems are rigorously analyzed in the framework of nonsmooth analysis and differential inclusion theory. First, for nonconvex optimization problems, quasiconvergence results similar to those achieved by previous work are proved. For convex optimization problems, global convergence results are proved, i.e., any trajectory of the neural network converges to an asymptotically stable equilibrium point, which is an optimal solution of the primal problem whenever one exists. This result remains valid even if the initial condition is chosen outside the feasible set. In addition, an asymptotic control result involving a Tykhonov-like regularization shows that any trajectory of the revised neural network can be forced to converge toward a particular optimal solution of the primal problem. Finally, two simulation algorithms are designed for solving the optimization and control problems, respectively, and three typical simulation experiments illustrate the accuracy and efficiency of the theoretical convergence results of this paper.
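For illustration only, the following is a minimal sketch (not taken from the paper) of generic projection-type dynamics of the form dx/dt ∈ P_Ω(x − ∂f(x)) − x, integrated by forward Euler. The nonsmooth convex objective, the box feasible set Ω, the step size, and the optional Tikhonov-like term `eps * x` are all illustrative assumptions.

```python
import numpy as np

# Illustrative nonsmooth convex objective f(x) = |x0 - 1| + (x1 - 2)**2
# over the box feasible set Omega = [0, 1] x [0, 1]. (Assumed example, not from the paper.)
def clarke_subgradient(x):
    # Return one element of Clarke's generalized gradient of f at x.
    g0 = np.sign(x[0] - 1.0)      # any value in [-1, 1] is admissible at the kink x0 = 1
    g1 = 2.0 * (x[1] - 2.0)
    return np.array([g0, g1])

def project_onto_box(x, lo=0.0, hi=1.0):
    # Euclidean projection P_Omega onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

def gradient_projection_flow(x0, steps=5000, dt=1e-2, eps=0.0):
    # Forward-Euler discretization of the projection-type dynamics
    #   dx/dt in P_Omega(x - (subgrad f(x) + eps * x)) - x,
    # where eps > 0 adds a Tikhonov-like regularization term (eps = 0 disables it).
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = clarke_subgradient(x) + eps * x
        x = x + dt * (project_onto_box(x - g) - x)
    return x

if __name__ == "__main__":
    # Initial condition chosen outside the feasible set, as permitted by the convergence result.
    x_star = gradient_projection_flow(x0=[3.0, -2.0])
    print("approximate minimizer:", x_star)   # expected near (1.0, 1.0)
```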