Abstract

We present a formal evaluation of the effect of weight decay training for backpropagation on noisy data sets. Weight decay training is proposed as a way to obtain a robust neural network that is insensitive to noise. We investigate three noisy situations: noisy training set–clean test set, clean training set–noisy test set, and noisy training set–noisy test set. Statistically, there is strong evidence that the noisy training set–clean test set situation yields more accurate predictions than the other two, which suggests that keeping the to-be-predicted cases noise-free is relatively more important for neural network classification. Furthermore, experimental results show that weight decay training is at least as good as standard backpropagation in noisy situations and, on some data sets, outperforms it by a significant margin; on clean data sets, however, there is no significant difference between the two. Another interesting finding concerns the effect of the number of training epochs in noisy situations: weight decay training converges after a short training period and, for the same short training, usually outperforms standard backpropagation. When additional training has a significant effect on performance, it improves standard backpropagation but degrades weight decay training.
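
For readers who want to see the mechanism behind the technique named above, the following is a minimal sketch of weight decay training, assuming the common L2 formulation in which a penalty proportional to the squared weights is added to the error; the single linear unit, the learning rate lr, and the decay coefficient lam are illustrative assumptions, not details taken from the paper. The update subtracts an extra lam * w term from each step, shrinking weights toward zero, and setting lam = 0 recovers standard backpropagation.

import numpy as np

def train_step(w, x, y, lr=0.01, lam=1e-4):
    """One backpropagation step for a single linear unit with weight decay.

    Weight decay adds (lam / 2) * ||w||^2 to the squared-error loss, so its
    gradient contributes an extra lam * w term that shrinks the weights on
    every update; lam = 0 gives the standard backpropagation update.
    """
    pred = x @ w                       # forward pass
    grad = x.T @ (pred - y) / len(y)   # gradient of mean squared error
    return w - lr * (grad + lam * w)   # decayed gradient step

# Example: a few steps on synthetic noisy data
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = x @ w_true + rng.normal(scale=0.1, size=32)  # noisy targets
w = np.zeros(4)
for _ in range(200):
    w = train_step(w, x, y)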
