Abstract

We propose a new globally convergent stochastic second-order method. Our starting point is the development of a new sketched Newton--Raphson (SNR) method for solving large-scale nonlinear equations of the form $F(x)=0$ with $F:\mathbb{R}^p \rightarrow \mathbb{R}^m$. We then show how to design several stochastic second-order optimization methods by rewriting the optimization problem of interest as a system of nonlinear equations and applying SNR. For instance, by applying SNR to find a stationary point of a generalized linear model, we derive completely new and scalable stochastic second-order methods. We show that the resulting methods are highly competitive with state-of-the-art variance-reduced methods. Furthermore, using a variable splitting trick, we also show that the stochastic Newton method (SNM) is a special case of SNR and use this connection to establish the first global convergence theory of SNM. We establish the global convergence of SNR by showing that it is a variant of the online stochastic gradient descent (SGD) method, and then leveraging proof techniques developed for SGD. As a special case, our theory also provides a new global convergence theory for the original Newton--Raphson method under strictly weaker assumptions than the classic monotone convergence theory.
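To make the abstract's central object concrete, here is a minimal sketch of the SNR iteration; the notation (sketching matrix $\mathbf{S}_k$, sketch size $\tau$, step size $\gamma$) is assumed here and may differ in minor details from the paper's. Given the Jacobian $DF(x) \in \mathbb{R}^{m \times p}$ of $F$ and a random sketching matrix $\mathbf{S}_k \in \mathbb{R}^{m \times \tau}$ drawn afresh at each iteration, a sketched Newton--Raphson step has the form

$$x_{k+1} \;=\; x_k \;-\; \gamma\, DF(x_k)^\top \mathbf{S}_k \left( \mathbf{S}_k^\top DF(x_k)\, DF(x_k)^\top \mathbf{S}_k \right)^{\dagger} \mathbf{S}_k^\top F(x_k),$$

where $\dagger$ denotes the Moore--Penrose pseudoinverse. Each iteration thus only requires solving a small $\tau \times \tau$ linear system rather than a full $m \times p$ Newton system, which is the source of the method's scalability; taking $\mathbf{S}_k = I$ and $\gamma = 1$ recovers a (pseudoinverse) full Newton--Raphson step.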

