Abstract

The backpropagation (BP) neural network has been widely used in many fields. However, designing the architecture and obtaining optimal parameters for BP neural networks remains a great challenge. Regularization is the most popular technique for improving the generalization performance when training BP neural networks. In this paper, we propose a novel BP algorithm with graph regularization (BPGR) that obtains optimal parameters by adding a graph regularization term to the error function. The essential idea is to force the latent features of the hidden layer to be more concentrated, which enhances the generalization performance. Moreover, the proposed modified graph regularization facilitates the calculation of the gradient and is better able to penalize extreme weight values. Furthermore, graph regularization can also be integrated with deep neural networks to improve their generalization performance. In addition, we provide a convergence analysis of BPGR under some regularity conditions. Experiments on several datasets with five activation functions validate the theoretical analysis and demonstrate the outstanding performance of BPGR.
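To make the idea of "adding a graph regularization term to the error function" concrete, the following is a minimal illustrative sketch in Python. It assumes a standard graph Laplacian penalty tr(H^T L H) on the hidden-layer activations H, built from a sample-similarity matrix S; the paper's modified graph regularizer and its exact weighting may differ, so the names H, S, and lam here are purely hypothetical.

```python
import numpy as np

def graph_regularized_loss(y_pred, y_true, H, S, lam=1e-3):
    """Sketch of a graph-regularized error function (not the paper's exact form).

    y_pred, y_true : (n_samples, n_outputs) predictions and targets
    H              : (n_samples, n_hidden) hidden-layer activations
    S              : (n_samples, n_samples) symmetric similarity (adjacency) matrix
    lam            : regularization strength (hypothetical default)
    """
    # Standard squared-error term of the BP objective
    mse = 0.5 * np.mean(np.sum((y_pred - y_true) ** 2, axis=1))

    # Graph Laplacian L = D - S built from the similarity matrix
    D = np.diag(S.sum(axis=1))
    L = D - S

    # Graph term tr(H^T L H) = 0.5 * sum_ij S_ij ||h_i - h_j||^2:
    # it pulls latent features of similar samples together, i.e. makes
    # the hidden-layer features more concentrated, as the abstract describes.
    graph_term = np.trace(H.T @ L @ H)

    return mse + lam * graph_term

# Purely illustrative usage with random data
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))
y_pred, y_true = rng.normal(size=(5, 2)), rng.normal(size=(5, 2))
S = (rng.random((5, 5)) > 0.5).astype(float)
S = np.triu(S, 1); S = S + S.T          # symmetric adjacency, zero diagonal
print(graph_regularized_loss(y_pred, y_true, H, S))
```

Because the penalty is a smooth quadratic function of H, its gradient with respect to the network weights can be obtained by backpropagating 2*lam*L@H through the hidden layer, which is consistent with the abstract's claim that the regularizer facilitates gradient computation.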
