Abstract

This paper presents a novel recurrent neural network for solving nonlinear convex programming problems subject to nonlinear inequality constraints. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and all constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution. Compared with existing neural networks for solving such nonlinear optimization problems, the proposed neural network has two major advantages. First, it can solve convex programming problems with general convex inequality constraints. Second, it does not require a Lipschitz condition on the objective or constraint functions. Simulation results are given to further illustrate the global convergence and performance of the proposed neural network for constrained nonlinear optimization.
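
To make the problem class concrete, the following is a minimal sketch of a generic primal-dual gradient dynamics (a simple "recurrent network" in continuous time, not the specific model proposed in the paper) for minimizing f(x) subject to g(x) <= 0, integrated with forward Euler. The example problem, step size, and iteration count are illustrative assumptions only.

```python
import numpy as np

# Hypothetical example: f(x) = (x1 - 1)^2 + (x2 - 2)^2 (strictly convex),
# g(x) = x1^2 + x2^2 - 1 <= 0 (convex). The minimizer is the projection
# of (1, 2) onto the unit disk, i.e. (1, 2)/sqrt(5).

def grad_f(x):
    return 2.0 * (x - np.array([1.0, 2.0]))

def g(x):
    return x @ x - 1.0

def grad_g(x):
    return 2.0 * x

def simulate(x0, lam0, dt=1e-3, steps=200_000):
    """Forward-Euler integration of a primal-dual gradient dynamics
    (a generic recurrent-network model, not the paper's network)."""
    x, lam = np.asarray(x0, dtype=float), float(lam0)
    for _ in range(steps):
        dx = -(grad_f(x) + lam * grad_g(x))   # primal descent on the Lagrangian
        dlam = max(lam + g(x), 0.0) - lam     # projected dual ascent keeps lam >= 0
        x, lam = x + dt * dx, lam + dt * dlam
    return x, lam

x_star, lam_star = simulate(x0=[0.0, 0.0], lam0=0.0)
print(x_star)  # approaches (1, 2)/sqrt(5) ~ (0.447, 0.894)
```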
