Abstract

Nonconvex stochastic optimization problems arise in many machine learning applications, including deep learning. Stochastic gradient Hamiltonian Monte Carlo (SGHMC) is a variant of the stochastic gradient method with momentum in which controlled, properly scaled Gaussian noise is added to the stochastic gradients to steer the iterates toward a global minimum. SGHMC has shown empirical success in practice for solving nonconvex stochastic optimization problems. In “Global convergence of stochastic gradient Hamiltonian Monte Carlo for nonconvex stochastic optimization: Nonasymptotic performance bounds and momentum-based acceleration,” Gao, Gürbüzbalaban, and Zhu provide, for the first time, finite-time performance bounds for the global convergence of SGHMC in the context of both population and empirical risk minimization problems, and show that acceleration with momentum is possible in the context of global nonconvex stochastic optimization.
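To make the algorithm described above concrete, the sketch below shows one common discretization of SGHMC: a momentum (velocity) update with a friction term and injected Gaussian noise, followed by a position update. The parameter names (step size eta, friction gamma, inverse temperature beta) and default values are illustrative assumptions, not the exact parameterization used in the paper.

```python
import numpy as np

def sghmc(grad_fn, theta0, n_iters=10_000, eta=1e-3, gamma=1.0, beta=1e3, rng=None):
    """Minimal SGHMC sketch (illustrative; parameter names and defaults are assumptions).

    grad_fn(theta) should return a stochastic gradient of the objective at theta,
    e.g. computed from a random mini-batch of data.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)                      # momentum (velocity) variable
    noise_scale = np.sqrt(2.0 * gamma * eta / beta)
    for _ in range(n_iters):
        g = grad_fn(theta)                        # stochastic gradient estimate
        # momentum update: friction gamma damps v, scaled Gaussian noise is injected
        v = v - eta * (gamma * v + g) + noise_scale * rng.standard_normal(theta.shape)
        theta = theta + eta * v                   # position update
    return theta
```

With beta large (low temperature) the injected noise is small and the dynamics behave like stochastic gradient descent with momentum; smaller beta adds more exploration, which is what allows the iterates to escape poor local minima in the nonconvex setting.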
