Abstract

We consider a neural network approach to an inverse variational inequality that is assumed to have a non-empty solution set. For gradient mappings with convex potentials, we prove that every trajectory of the network converges to the solution set; if, in addition, the solution set is a singleton, the network is globally asymptotically stable at the equilibrium point. We also prove that if the network has a strongly convex potential, then it is globally exponentially stable at the equilibrium point. A further purpose of this paper is to point out certain fatal mistakes in the paper by Zou et al. [A novel method to solve inverse variational inequality problems based on neural networks. Neurocomputing. 2016;173:1163–1168].
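As a rough illustration of the kind of dynamics such papers study (a sketch under stated assumptions, not the specific network of this paper or of Zou et al.): an inverse variational inequality asks for u* with f(u*) ∈ Ω and ⟨v − f(u*), u*⟩ ≥ 0 for all v ∈ Ω, which for any α > 0 admits the fixed-point characterization f(u*) = P_Ω(f(u*) − α u*). This suggests projection-type dynamics x′ = λ(P_Ω(f(x) − αx) − f(x)), sketched below with an illustrative one-dimensional example; all concrete choices (f, Ω, step sizes) are assumptions for the demo.

```python
# Sketch of a projection-type dynamical system for an inverse
# variational inequality (IVI): find u* with f(u*) in Omega and
# <v - f(u*), u*> >= 0 for all v in Omega.
# Dynamics (for alpha, lam > 0):  x' = lam * (P_Omega(f(x) - alpha*x) - f(x)).
# Equilibria of these dynamics are exactly the IVI solutions, by the
# fixed-point characterization f(u*) = P_Omega(f(u*) - alpha*u*).

def project_box(y, lo, hi):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(y, lo), hi)

def solve_ivi(f, lo, hi, x0, alpha=0.5, lam=1.0, dt=0.1, steps=500):
    """Forward-Euler integration of the projection dynamics."""
    x = x0
    for _ in range(steps):
        residual = project_box(f(x) - alpha * x, lo, hi) - f(x)
        x += dt * lam * residual
    return x

# Example: f(x) = x, the gradient of the convex potential x**2 / 2,
# with Omega = [1, 2].  The unique IVI solution is u* = 1: indeed
# f(u*) = 1 lies in Omega and (v - 1) * 1 >= 0 for every v in [1, 2].
u = solve_ivi(lambda x: x, 1.0, 2.0, x0=3.0)
print(u)  # close to 1.0
```

With a strongly convex potential such as x²/2, the trajectory contracts toward the equilibrium at an exponential rate, mirroring the exponential-stability result stated in the abstract.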
