Abstract

This paper presents a recurrent neural network for solving nonconvex nonlinear optimization problems subject to nonlinear inequality constraints. First, the p‐power transformation is exploited for local convexification of the Lagrangian function of the nonconvex nonlinear optimization problem. Next, the proposed neural network is constructed based on the Karush–Kuhn–Tucker (KKT) optimality conditions and the projection function. An important property of this neural network is that its equilibrium point corresponds to the optimal solution of the original problem. By utilizing an appropriate Lyapunov function, it is shown that the proposed neural network is stable in the sense of Lyapunov and convergent to the global optimal solution of the original problem. The sensitivity of the convergence is also analyzed by varying the scaling factors. Compared with other existing neural networks for such problems, the proposed neural network offers advantages such as high accuracy of the obtained solutions, fast convergence, and low complexity. Finally, simulation results are provided to show the benefits of the proposed model, which match or outperform those of existing models.
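To make the projection-network idea concrete, the following is a minimal sketch of KKT-based primal–dual dynamics for an inequality-constrained problem, integrated as an ODE. It does not reproduce the paper's exact model; the p-power convexification step is omitted, the toy objective and constraint are hypothetical, and the dynamics shown are the standard projection form (gradient descent on the Lagrangian for the primal variables, projection of the multipliers onto the nonnegative orthant).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a projection-based recurrent network for
#   min f(x)  s.t.  g(x) <= 0,
# driven by KKT-based dynamics. Toy problem (hypothetical):
#   f(x) = (x1 - 2)^2 + (x2 - 2)^2,  g(x) = x1^2 + x2^2 - 1.

def f_grad(x):
    return 2.0 * (x - 2.0)             # gradient of the objective

def g(x):
    return np.array([x @ x - 1.0])     # single inequality constraint

def g_jac(x):
    return 2.0 * x[None, :]            # Jacobian of g, shape (1, 2)

def dynamics(t, z, alpha=1.0):         # alpha: assumed scaling factor
    x, u = z[:2], z[2:]                # primal state and multiplier
    # Primal flow: descent on the Lagrangian L(x, u) = f(x) + u^T g(x)
    dx = -alpha * (f_grad(x) + g_jac(x).T @ u)
    # Dual flow: project multipliers onto the nonnegative orthant
    du = np.maximum(u + g(x), 0.0) - u
    return np.concatenate([dx, du])

z0 = np.zeros(3)                       # start at origin, zero multiplier
sol = solve_ivp(dynamics, (0.0, 50.0), z0, rtol=1e-8, atol=1e-8)
x_star, u_star = sol.y[:2, -1], sol.y[2:, -1]
print("x* ~", x_star)                  # expect ~ (0.707, 0.707) on the circle
print("u* ~", u_star)                  # active constraint: multiplier > 0
```

At the equilibrium of these dynamics, dx = 0 and du = 0 recover the KKT conditions: stationarity of the Lagrangian in x, and complementarity u = (u + g(x))^+, which enforces u >= 0, g(x) <= 0, and u^T g(x) = 0.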
