Abstract

Connectionist models trained with the back-propagation learning rule are known to suffer from a serious problem: they exhibit catastrophic interference (or forgetting) under sequential training. After such a model learns a first set of patterns, training it on a second set causes its performance on the first set to deteriorate rapidly and dramatically. The present study reconsiders this issue with three sets of simulations. With orthogonal input vectors, interference can be reasonably mild. The number of hidden units was critical to the degree of interference, contrary to the suggestions of previous studies. The output coding scheme was also found to be critical, and the length of the input lists likewise influenced the degree of interference. This study suggests that the interference problem has been overstated in the literature.
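As a concrete illustration of the training regime described above, the following is a minimal sketch of sequential training in a small back-propagation network, written in plain NumPy. The network size, learning rate, epoch count, and pattern sets are illustrative assumptions, not the paper's actual simulation parameters; the sketch only shows how error on the first list can be measured before and after training on the second list.

```python
# Minimal sketch of sequential training and interference measurement.
# All hyperparameters below are illustrative assumptions, not taken
# from the paper's simulations.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Tiny one-hidden-layer network trained by plain back-propagation."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train_step(self, x, t):
        y = self.forward(x)
        # Back-propagate the squared error through both layers.
        delta_out = (y - t) * y * (1.0 - y)
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta_out)
        self.W1 -= self.lr * np.outer(x, delta_hid)

def train(net, patterns, epochs=2000):
    for _ in range(epochs):
        for x, t in patterns:
            net.train_step(x, t)

def mse(net, patterns):
    return np.mean([(net.forward(x) - t) ** 2 for x, t in patterns])

# Orthogonal input vectors (rows of the identity matrix) split into
# two lists, each paired with random binary target patterns.
inputs = np.eye(8)
targets = rng.integers(0, 2, (8, 4)).astype(float)
set_a = list(zip(inputs[:4], targets[:4]))
set_b = list(zip(inputs[4:], targets[4:]))

net = MLP(n_in=8, n_hidden=16, n_out=4)
train(net, set_a)
print(f"error on set A after learning A: {mse(net, set_a):.4f}")
train(net, set_b)  # sequential training on the second list only
print(f"error on set A after learning B: {mse(net, set_a):.4f}")
```

With one-hot (orthogonal) inputs, each pattern drives a distinct row of the input-to-hidden weights, so any interference arrives only through the shared hidden-to-output weights; this is consistent with the abstract's observation that interference can be reasonably mild for orthogonal input vectors.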
