Abstract

The relationship between generalization and the learning error function is studied when the training data are contaminated by noise. A random neural network is considered, and the Kullback-Leibler (K-L) information distance is used to estimate the error of the neural network. It is proved that the K-L information distance is consistent with the generalization error. Over-fitting is analyzed on the basis of the K-L distance. A new learning error function is proposed, and the generalization error is estimated when this learning error function is applied. A simulation example shows that the proposed learning error function indeed improves generalization.
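As a minimal illustration of the error measure the abstract refers to, the sketch below computes the K-L information distance D(p || q) between two discrete probability distributions. This is only the standard definition, not the paper's specific estimator; the function name and the clipping constant `eps` are illustrative choices.

```python
import numpy as np

def kl_distance(p, q, eps=1e-12):
    """Kullback-Leibler information distance D(p || q) between two
    discrete probability distributions p and q (illustrative sketch)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Clip to avoid log(0); eps is an illustrative choice, not from the paper.
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

p = [0.5, 0.3, 0.2]
print(kl_distance(p, p))                 # zero when the distributions coincide
print(kl_distance(p, [0.2, 0.3, 0.5]))   # positive when they differ
```

The distance vanishes exactly when the network's output distribution matches the target and grows as the two drift apart, which is what makes it usable as an error measure for a network trained on noisy data.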
