Abstract

Convergence of iterative learning is a common problem in neural networks: fast learning requires the ability to control the rate of convergence. In the present paper, the single-layer perceptron model for two classes is considered, and the rate of convergence is studied under several choices of the gain term in the update rule. Experimental results on a number of two-class problems are reported.
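The setting described above can be sketched as follows. This is a minimal illustration of the two-class single-layer perceptron with a configurable gain term, not the paper's implementation; the specific gain schedules shown (constant and 1/t decay) are assumptions for illustration, since the abstract does not name the gain choices studied.

```python
import numpy as np

def train_perceptron(X, y, gain, epochs=100):
    """Two-class single-layer perceptron (labels +1/-1).

    `gain` maps the update count t to a learning rate; the choice of
    this gain term governs the rate of convergence.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])                      # weights plus bias
    t = 0
    for _ in range(epochs):
        errors = 0
        for x, target in zip(Xb, y):
            t += 1
            if target * np.dot(w, x) <= 0:         # misclassified sample
                w += gain(t) * target * x          # perceptron update rule
                errors += 1
        if errors == 0:                            # all samples correct: converged
            break
    return w

# Linearly separable toy two-class problem
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

w_const = train_perceptron(X, y, gain=lambda t: 1.0)      # constant gain
w_decay = train_perceptron(X, y, gain=lambda t: 1.0 / t)  # decaying gain (illustrative)
```

For linearly separable data, the classical perceptron convergence theorem guarantees termination with a constant gain; comparing gain schedules as above is one way to observe differences in convergence speed empirically.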
