Abstract
This paper presents a theoretical boundedness and convergence analysis of the online gradient method for training two-layer feedforward neural networks. The well-known linear difference equation is extended to the general case of linear or nonlinear activation functions. Based on this extended difference equation, we investigate the boundedness and convergence of the parameter sequence of concern, which is trained on a finite set of training samples with a constant learning rate. We show that the uniform upper bound of the parameter sequence, which is very important in the training procedure, is the solution of an inequality in the bound. It is further verified that, for the case of a linear activation function, such a solution always exists and the parameter sequence is uniformly upper bounded, while for the case of a nonlinear activation function, simple adjustments of the training set or of the activation function can be derived to improve the boundedness property. For the convergence analysis, it is shown that the parameter sequence converges into a zone around an optimal solution at which the error function attains its global minimum, where the size of the zone depends on the learning rate. In particular, for the case of perfect modeling, a stronger global convergence result is proved: the parameter sequence always converges to an optimal solution.
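To make the analyzed setting concrete, the following is a minimal sketch of online (sample-by-sample) gradient training of a two-layer feedforward network with a constant learning rate and a finite training set. The squared-error loss, tanh hidden activation, network sizes, and variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_samples = 3, 5, 20
eta = 0.05                                        # constant learning rate

X = rng.standard_normal((n_samples, n_in))        # finite training set (assumed data)
y = rng.standard_normal(n_samples)

W = 0.1 * rng.standard_normal((n_hidden, n_in))   # input-to-hidden weights
v = 0.1 * rng.standard_normal(n_hidden)           # hidden-to-output weights

for epoch in range(100):
    for x_t, y_t in zip(X, y):                    # one update per training sample (online mode)
        h = np.tanh(W @ x_t)                      # hidden-layer output (nonlinear activation)
        y_hat = v @ h                             # network output
        err = y_hat - y_t

        # gradients of the per-sample squared error 0.5 * err**2
        grad_v = err * h
        grad_W = err * np.outer(v * (1.0 - h ** 2), x_t)

        v -= eta * grad_v                         # constant-step online updates
        W -= eta * grad_W
```

The parameter sequence studied in the paper corresponds to the successive values of the weights (here `W` and `v`) produced by these updates; the boundedness and convergence results concern how this sequence behaves for a fixed learning rate `eta`.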