Abstract

When a traditional BP neural network has a large number of neurons in its hidden layers, it can fit complex practical objective functions very well, but for the same reason over-fitting is almost inevitable and becomes more severe when only a very limited amount of training data is available. This paper proposes a new BP neural network optimisation method based on dynamical regularization (DRBP). Unlike traditional regularization, which relies on a fixed prior assumption, the proposed method performs weight decay while dynamically adjusting the regularization parameter according to the stability of the network throughout the training process. The experimental results presented in this paper show that the method effectively counteracts over-fitting and strengthens the generalisation ability of the model, which in turn improves classification accuracy on the test data.
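To make the idea concrete, the sketch below shows a small BP network trained with L2 weight decay whose regularization parameter is adjusted each epoch. The abstract does not specify how DRBP measures network stability or updates the parameter, so the stability proxy used here (the relative change of the weight norm between epochs) and the multiplicative update of `lam` are illustrative assumptions, not the paper's exact rule.

```python
# Minimal sketch of dynamically adjusted weight decay (NOT the paper's exact DRBP rule):
# a one-hidden-layer BP network whose L2 regularization strength `lam` is tuned each
# epoch from a simple stability proxy -- the relative change of the weight norm.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (assumption: any small dataset works for the demo).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Network parameters: 4 inputs -> 16 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr, lam = 0.1, 1e-3          # learning rate and initial regularization parameter
prev_norm = None

for epoch in range(300):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)

    # Backward pass for mean cross-entropy loss plus L2 weight decay.
    dZ2 = (P - y) / len(X)
    dW2 = H.T @ dZ2 + lam * W2
    db2 = dZ2.sum(axis=0)
    dZ1 = dZ2 @ W2.T * H * (1 - H)
    dW1 = X.T @ dZ1 + lam * W1
    db1 = dZ1.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    # Illustrative dynamic adjustment: if the weights are still changing a lot,
    # treat the network as unstable and increase the decay; if they have nearly
    # settled, relax it slightly.
    norm = np.sqrt((W1 ** 2).sum() + (W2 ** 2).sum())
    if prev_norm is not None:
        rel_change = abs(norm - prev_norm) / prev_norm
        lam = min(1e-1, lam * 1.05) if rel_change > 1e-3 else max(1e-5, lam * 0.95)
    prev_norm = norm

acc = ((P > 0.5) == y).mean()
print(f"training accuracy: {acc:.3f}, final lambda: {lam:.2e}")
```

The point of the sketch is only the control loop: the penalty term stays an ordinary weight-decay gradient, while the strength of that penalty is recomputed from the network's recent behaviour rather than fixed in advance.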
