Abstract

A first step towards a rapid-learning algorithm is presented. The proposed learning rules enable a network to learn new information from few training examples without destroying previously learned information. To learn from few examples, the network must allow relatively large weight changes; these large changes let the network reproduce the presented training examples. To avoid destroying previously learned information, the new learning rules should not change connections whose weights have already stabilized. The authors therefore propose to associate an additional value, called plasticity, with each connection, indicating how much the connection weight may still be adjusted. Simulations using the proposed learning rules demonstrate that they enable a network to rapidly learn to distinguish among several patterns.
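The plasticity mechanism lends itself to a simple illustration. Below is a minimal Python/NumPy sketch of one way such a rule could look: each connection's update is scaled by its plasticity value, and plasticity decays once a connection has adapted, so large steps are possible early while stabilized weights are protected. The delta-rule update, the decay schedule, and all names (`present`, `eta`, `decay`, and so on) are illustrative assumptions; the abstract does not specify the authors' exact rule.

```python
import numpy as np

# Plasticity-gated delta-rule sketch. The abstract only says each connection
# carries a plasticity value limiting how far its weight may move; the
# concrete update rule and decay schedule below are assumptions.

rng = np.random.default_rng(0)

n_in, n_out = 8, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))  # connection weights
P = np.ones_like(W)                            # per-connection plasticity in (0, 1]
eta = 0.5                                      # large base step, enabling rapid learning
decay = 0.8                                    # plasticity shrinks once a connection adapts

def present(x, target):
    """One presentation of a training pattern (linear unit, delta rule)."""
    global W, P
    err = target - W @ x                       # per-output error
    grad = np.outer(err, x)                    # delta-rule gradient per connection
    W += eta * P * grad                        # plastic connections change a lot, stable ones barely
    P *= np.where(np.abs(grad) > 1e-3, decay, 1.0)  # consolidate connections that just adapted

# Rapid learning from few examples: a handful of presentations per pattern.
patterns = [(rng.normal(size=n_in), np.eye(n_out)[k]) for k in range(n_out)]
for _ in range(5):
    for x, t in patterns:
        present(x, t)
```

In this sketch, each connection's plasticity decays roughly in proportion to how much it has adapted, so a later, novel pattern would mainly recruit the still-plastic connections rather than overwrite the stabilized ones.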
