Abstract

In this paper, the performance of a gradient neural network (GNN), originally designed for solving static problems, is investigated, analyzed, and simulated in the presence of time-varying coefficients. It is theoretically proved that a GNN used for the online solution of time-varying quadratic minimization (QM) and quadratic programming (QP) problems can only approximately approach the time-varying theoretical solution rather than converge to it exactly; that is, the steady-state error between the GNN solution and the theoretical solution cannot decrease to zero. To characterize this behavior, an upper bound on the error is first estimated, and the global exponential rate at which the GNN converges to this error bound is then investigated. Computer-simulation results, including those based on a six-link robot manipulator, further substantiate the performance analysis of the GNN when applied to the online solution of time-varying QM and QP problems.
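As a minimal, hedged illustration of the phenomenon the abstract describes (not the paper's own example), the sketch below simulates a GNN with dynamics x'(t) = -gamma*(P(t)x + q(t)) applied to a time-varying QM problem minimize 0.5*x'P(t)x + q(t)'x. The coefficient forms P(t) and q(t), the gain gamma, and the Euler integration step are illustrative assumptions; the point is that the tracking error with respect to the time-varying theoretical solution x*(t) = -P(t)^{-1} q(t) settles at a nonzero bound instead of vanishing.

```python
# Minimal sketch (illustrative assumptions, not the paper's example):
# a gradient neural network (GNN) designed for static quadratic minimization,
# applied with time-varying coefficients, lags behind the moving solution.
import numpy as np

gamma = 10.0   # design gain controlling the convergence rate (assumed value)
dt = 1e-3      # forward-Euler integration step (assumed)
T = 5.0        # simulation horizon (assumed)

def P(t):
    # Time-varying, symmetric positive-definite quadratic coefficient (assumed form)
    return np.array([[2.0 + np.sin(t), 0.5],
                     [0.5, 2.0 + np.cos(t)]])

def q(t):
    # Time-varying linear coefficient (assumed form)
    return np.array([np.cos(t), np.sin(t)])

x = np.zeros(2)  # GNN state
for k in range(int(T / dt)):
    t = k * dt
    # GNN dynamics: x' = -gamma * gradient of 0.5*x'P(t)x + q(t)'x w.r.t. x
    x = x + dt * (-gamma * (P(t) @ x + q(t)))

x_star = -np.linalg.solve(P(T), q(T))  # time-varying theoretical solution at t = T
# The steady-state error stays bounded away from zero (it shrinks as gamma grows,
# but does not vanish for any finite gamma).
print("steady-state error:", np.linalg.norm(x - x_star))
```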
