Abstract
The rule that the stochastic gradient descent (SGD) algorithm uses to update the unknown parameters during the iterative process can be viewed, from the perspective of numerical differentiation, as a rudimentary forward Euler method. To overcome the inherent imperfection of the forward differentiation rule and the resulting computational error of the SGD algorithm, a new algorithm is obtained by replacing the original update rule in SGD with the Lagrange-type 1-step-ahead numerical differentiation rule. In addition, extensive experiments comparing the original SGD algorithm with the modified algorithm are conducted to analyze convergence. Empirical results demonstrate that the Lagrange-type 1-step-ahead rule cannot be applied to the SGD algorithm: the new algorithm does not converge. A theoretical analysis is given to explain this result.
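The abstract does not state the exact formulation used in the paper, but the idea can be illustrated with a minimal sketch. Assuming the 1-step-ahead rule is the 3-point central-difference formula theta'(t_k) ≈ (theta_{k+1} - theta_{k-1}) / (2*eta), solving for theta_{k+1} turns the forward-Euler update of SGD into a two-step recursion. The deterministic quadratic loss, the step size, and the specific differentiation formula below are all illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch only (not the paper's exact formulation).
# SGD as forward Euler on the gradient flow d(theta)/dt = -grad L(theta),
# versus a hypothetical update derived from a 3-point central (1-step-ahead)
# differentiation rule: theta_{k+1} = theta_{k-1} - 2 * eta * grad L(theta_k).
import numpy as np

def grad(theta):
    # Gradient of a simple quadratic loss L(theta) = 0.5 * ||theta||^2 (assumed toy problem).
    return theta

eta = 0.05                      # step size (learning rate), chosen arbitrarily
theta0 = np.array([1.0, -2.0])  # arbitrary starting point

# Standard SGD / forward Euler: theta_{k+1} = theta_k - eta * grad(theta_k).
theta = theta0.copy()
for _ in range(200):
    theta = theta - eta * grad(theta)
print("forward Euler (SGD) final norm:", np.linalg.norm(theta))

# Hypothetical 1-step-ahead (central-difference) update: needs two past iterates,
# so we bootstrap the second one with a single Euler step.
prev, curr = theta0.copy(), theta0 - eta * grad(theta0)
for _ in range(200):
    nxt = prev - 2.0 * eta * grad(curr)
    prev, curr = curr, nxt
print("central-difference update final norm:", np.linalg.norm(curr))
```

On this toy problem the two-step recursion has characteristic roots r = -eta ± sqrt(eta^2 + 1), one of which lies outside the unit circle, so the iterates oscillate and grow while plain SGD contracts toward the minimizer. This is consistent with, though not a substitute for, the non-convergence result reported in the abstract.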