Abstract

Sudharsanan and Sundareshan (1991) developed a neural-network model for bound-constrained quadratic minimization and presented a proof of the global exponential convergence of the proposed network. Global exponential convergence is a critical property of the synthesized neural network for solving the optimization problem successfully. However, Davis and Pattison (1992) presented a counterexample showing that the proof given by Sudharsanan and Sundareshan is not correct. Bouzerdoum and Pattison (IEEE Trans. Neural Networks, vol. 4, no. 2, pp. 293-303, 1993) then generalized the neural-network model of Sudharsanan and Sundareshan and derived global exponential convergence under an appropriate condition. In this letter, we demonstrate through an example that the convergence condition given by Bouzerdoum and Pattison is not always satisfied by the quadratic minimization problem, and we show that under this condition their neural-network model is essentially restricted to contractive networks. We then give a complete proof of the global exponential convergence of the neural-network models proposed by Sudharsanan and Sundareshan and by Bouzerdoum and Pattison in the general case, without resorting to the convergence condition of Bouzerdoum and Pattison. An illustrative simulation example is also presented.
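To make the setting concrete, the sketch below simulates a projection-type continuous-time network for a bound-constrained quadratic program, integrated with forward Euler. The specific dynamics dx/dt = clip(x - alpha*(Qx + c), l, u) - x, the function name solve_bound_qp, and the parameters alpha and dt are illustrative assumptions for this kind of network, not the exact model of Sudharsanan and Sundareshan or Bouzerdoum and Pattison.

```python
# Hedged sketch: projection-type dynamics for
#   min 0.5*x'Qx + c'x   subject to   l <= x <= u,
# integrated with forward Euler. Illustrative only; not the authors' exact model.
import numpy as np


def solve_bound_qp(Q, c, l, u, alpha=0.1, dt=0.01, steps=20000, tol=1e-10):
    """Integrate the projection dynamics until the state stops moving."""
    x = np.clip(np.zeros_like(c), l, u)                    # feasible initial state
    for _ in range(steps):
        target = np.clip(x - alpha * (Q @ x + c), l, u)    # projected gradient step
        dx = target - x                                    # drive toward the projection
        x = x + dt * dx                                    # Euler integration
        if np.linalg.norm(dx) < tol:                       # equilibrium reached
            break
    return x


if __name__ == "__main__":
    # Small example: strictly convex quadratic whose constrained minimizer
    # lies on the bounds (here at x = [1, 1]).
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    c = np.array([-8.0, -6.0])
    l = np.array([0.0, 0.0])
    u = np.array([1.0, 1.0])
    print(solve_bound_qp(Q, c, l, u))
```

At an equilibrium the state equals its own projected gradient update, which is the usual optimality characterization for the bound-constrained problem; the rate at which trajectories approach this equilibrium is what the global exponential convergence results discussed in the letter are about.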
