Abstract

In this paper, a recurrent neural network is proposed for both convex and nonconvex equality-constrained optimization problems; it makes use of a projection of the cost gradient onto the tangent space of the constraints. The proposed neural network constructs a generically infeasible trajectory that satisfies the constraints only as t → ∞. Local convergence results are given that do not assume convexity of the optimization problem to be solved. Global convergence results are established for convex optimization problems. An exponential convergence rate is shown to hold in both the convex and the nonconvex case. Numerical results indicate that the proposed method is efficient and accurate.
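To make the idea concrete, the following is a minimal sketch, not the paper's exact network, of a generic gradient-projection flow of this type: the state moves along the cost gradient projected onto the tangent space (null space of the constraint Jacobian), plus a restoration term that drives the constraint residual to zero, so feasibility is reached only asymptotically. The example cost, constraint, gain gamma, and Euler step size are illustrative assumptions and do not come from the paper.

```python
# Minimal sketch (assumed, not the paper's exact dynamics): gradient-projection
# flow for  min f(x)  s.t.  h(x) = 0,  integrated with forward Euler.
import numpy as np

def f_grad(x):
    # Illustrative cost: f(x) = ||x - c||^2 / 2
    c = np.array([2.0, 1.0])
    return x - c

def h(x):
    # Illustrative equality constraint: h(x) = x0^2 + x1^2 - 1 (unit circle)
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def h_jac(x):
    # Jacobian of h, shape (m, n)
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

def flow(x, gamma=1.0):
    # Right-hand side of the dynamics:
    #   dx/dt = -P(x) grad f(x) - J(x)^T (J J^T)^{-1} gamma h(x)
    # where P = I - J^T (J J^T)^{-1} J projects onto the constraint tangent
    # space. Along this flow dh/dt = -gamma h, so h(x(t)) -> 0 exponentially
    # and the trajectory is feasible only in the limit.
    J = h_jac(x)
    JJt_inv = np.linalg.inv(J @ J.T)
    P = np.eye(x.size) - J.T @ JJt_inv @ J
    return -P @ f_grad(x) - J.T @ JJt_inv @ (gamma * h(x))

x = np.array([0.5, -1.5])           # infeasible initial point
dt = 1e-2
for _ in range(5000):               # Euler integration of the trajectory
    x = x + dt * flow(x)

print("x* =", x, " h(x*) =", h(x))  # approaches the constrained minimizer
```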
