Abstract

Multilayer feedforward neural networks are commonly trained with the error backpropagation (BP) algorithm, which minimizes the error between the outputs of a neural network (NN) and the training data. Consequently, when the training data are noisy, a trained network memorizes the noisy outputs for the given inputs. Such learning is called rote memorization learning (RML). In this paper we propose error correcting memorization learning (CML), which can suppress noise in the training data. To evaluate the generalization ability of CML, it is compared with the projection learning (PL) criterion. It is theoretically proved that, although CML merely suppresses noise in the training data, it provides the same generalization as PL under a certain necessary and sufficient condition.
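
As an illustration of the rote memorization behaviour described above, the following is a minimal sketch (not taken from the paper) of a small feedforward network trained by backpropagation to minimize squared error on noisy targets; the target function, noise level, network size, and learning rate are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's method): a small
# feedforward network trained by backpropagation on noisy targets. Because BP
# minimizes the error against the noisy training outputs, the network fits the
# noise itself -- i.e., rote memorization learning (RML).
import numpy as np

rng = np.random.default_rng(0)

# Noisy training data: y = sin(x) + noise (assumed target function and noise level)
x_train = np.linspace(-3, 3, 20).reshape(-1, 1)
y_train = np.sin(x_train) + 0.3 * rng.standard_normal(x_train.shape)

# One hidden layer with tanh activations, trained by plain gradient descent (BP)
n_hidden = 50
W1 = 0.5 * rng.standard_normal((1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, 1))
b2 = np.zeros(1)
lr = 0.01

for step in range(20000):
    # Forward pass
    h = np.tanh(x_train @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y_train                      # the error that BP minimizes

    # Backward pass: gradients of the mean squared error
    g_y = 2 * err / len(x_train)
    g_W2 = h.T @ g_y
    g_b2 = g_y.sum(axis=0)
    g_h = (g_y @ W2.T) * (1 - h ** 2)
    g_W1 = x_train.T @ g_h
    g_b1 = g_h.sum(axis=0)

    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

# The training error (against the noisy targets) becomes small: the network has
# memorized the noisy outputs, even where they differ from the underlying sin(x).
train_mse = np.mean((np.tanh(x_train @ W1 + b1) @ W2 + b2 - y_train) ** 2)
x_test = np.linspace(-3, 3, 200).reshape(-1, 1)
test_mse = np.mean((np.tanh(x_test @ W1 + b1) @ W2 + b2 - np.sin(x_test)) ** 2)
print(f"training MSE (vs noisy targets):  {train_mse:.4f}")
print(f"test MSE (vs noise-free sin(x)):  {test_mse:.4f}")
```

In a typical run of this sketch the training error shrinks toward the noise floor while the error against the noise-free function stays larger, which is the generalization problem that noise-suppressing criteria such as CML and PL address.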
