Abstract
Nonconvex stochastic optimization problems arise in many machine learning tasks, including deep learning. Stochastic gradient Hamiltonian Monte Carlo (SGHMC) is a variant of stochastic gradient descent with momentum in which controlled, properly scaled Gaussian noise is added to the stochastic gradients to steer the iterates toward a global minimum. SGHMC has shown empirical success in practice for solving nonconvex stochastic optimization problems. In "Global convergence of stochastic gradient Hamiltonian Monte Carlo for nonconvex stochastic optimization: Nonasymptotic performance bounds and momentum-based acceleration," Gao, Gürbüzbalaban, and Zhu provide, for the first time, finite-time performance bounds for the global convergence of SGHMC in the context of both population and empirical risk minimization problems and show that acceleration with momentum is possible in the context of global nonconvex stochastic optimization.
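To make the update rule concrete, the sketch below shows one common Euler discretization of the underdamped Langevin dynamics that SGHMC simulates: a momentum step damped by friction, driven by a noisy gradient, with injected Gaussian noise scaled by the step size and an inverse temperature. This is a minimal illustration, not the paper's exact scheme; the parameter names (eta for step size, gamma for friction, beta for inverse temperature) and the helper sghmc are illustrative choices, not the authors' notation.

import numpy as np

def sghmc(stochastic_grad, x0, eta=1e-3, gamma=1.0, beta=1e3,
          n_iters=10_000, rng=None):
    """Minimal SGHMC sketch (one Euler discretization; names are illustrative).

    stochastic_grad(x) should return an unbiased estimate of the gradient
    of the (possibly nonconvex) objective at x.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # momentum variable
    noise_scale = np.sqrt(2.0 * gamma * eta / beta)
    for _ in range(n_iters):
        g = stochastic_grad(x)  # noisy gradient estimate
        # Friction damps the momentum; the injected Gaussian noise lets
        # the iterates escape shallow local minima.
        v = v - eta * (gamma * v + g) + noise_scale * rng.standard_normal(x.shape)
        x = x + eta * v
    return x

# Toy usage: noisy gradient of the nonconvex double-well f(x) = (x^2 - 1)^2.
grad = lambda x: 4 * x * (x**2 - 1) + 0.1 * np.random.randn(*x.shape)
x_star = sghmc(grad, x0=np.array([2.0]))

At low temperature (large beta) the noise is small and the dynamics behave like momentum SGD; larger noise encourages exploration of the nonconvex landscape, which is the mechanism behind the global convergence guarantees studied in the paper.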