Abstract
Nonlinear optimization problems in which the objective function is given as a sum of partial functions are important because they include many machine learning problems. Although several extensions of stochastic gradient descent (SGD) have been proposed for solving such problems, each has its own disadvantages. To overcome these disadvantages, we previously proposed a method based on stochastic gradient descent. However, that method fails to converge to a good solution when it overestimates the average gradient of the partial functions. In addition, its initial step size is small, which slows convergence. To resolve these problems and achieve faster convergence, in this paper we propose an extension of our previous method.
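For context, the problem class the abstract describes is the finite-sum objective f(x) = (1/n) Σᵢ fᵢ(x), which plain SGD minimizes by stepping along the gradient of one randomly sampled partial function at a time. The sketch below illustrates only this baseline setting, not the paper's proposed method; the least-squares partial functions and the fixed step size eta are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: generic SGD on a finite-sum objective
# f(x) = (1/n) * sum_i f_i(x). This is the baseline the paper
# extends, NOT the proposed method itself. The quadratic partial
# functions f_i(x) = 0.5 * (a_i^T x - b_i)^2 are an assumption
# chosen for illustration.

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def partial_grad(x, i):
    """Gradient of the i-th partial function f_i."""
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
eta = 0.01  # fixed step size; the abstract notes that too small an
            # initial step size slows convergence
for t in range(10_000):
    i = rng.integers(n)            # sample one partial function
    x -= eta * partial_grad(x, i)  # stochastic gradient step
```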