The conjugate gradient (CG) method, an effective technique for speeding up gradient descent algorithms, has shown great potential and has been widely used for large-scale machine-learning problems. However, CG and its variants were not devised for the stochastic setting, which makes them extremely unstable and can even lead to divergence when noisy gradients are used. This article develops a novel class of stable stochastic CG (SCG) algorithms with a faster convergence rate by combining a variance-reduction technique with an adaptive step-size rule in the mini-batch setting. In particular, instead of the line search used in CG-type approaches, which is time-consuming and can even fail for SCG, this article adopts the random stabilized Barzilai-Borwein (RSBB) method to obtain the step size online. We rigorously analyze the convergence properties of the proposed algorithms and show that they attain a linear convergence rate in both the strongly convex and nonconvex settings. We also show that the total complexity of the proposed algorithms matches that of modern stochastic optimization algorithms in different cases. Extensive numerical experiments on machine-learning problems demonstrate that the proposed algorithms outperform state-of-the-art stochastic optimization algorithms.
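To illustrate the kind of scheme the abstract describes, the following is a minimal sketch of a mini-batch variance-reduced stochastic CG step with a Barzilai-Borwein-type step size. The abstract does not specify the exact update rules, so an SVRG-style variance-reduced gradient, a Fletcher-Reeves-type conjugate direction, and a simple clipped BB step size are assumed here as stand-ins for the paper's RSBB rule; all function and variable names are hypothetical.

```python
# Sketch only: variance-reduced stochastic CG with a clipped BB step size.
# Not the authors' exact algorithm; the RSBB rule is replaced by a plain
# clipped Barzilai-Borwein step for illustration.
import numpy as np

def make_least_squares(n=1000, d=20, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    x_true = rng.standard_normal(d)
    b = A @ x_true + 0.01 * rng.standard_normal(n)
    def grad(x, idx=None):
        # Gradient of (1/2n)||Ax - b||^2 on the full data or a mini-batch.
        if idx is None:
            return A.T @ (A @ x - b) / n
        Ai, bi = A[idx], b[idx]
        return Ai.T @ (Ai @ x - bi) / len(idx)
    return grad, d, n

def vr_scg(grad, d, n, epochs=20, batch=32, alpha0=0.1, seed=1):
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_g = grad(x_snap)                  # full gradient at the snapshot
        g_prev, d_prev, x_prev = None, None, None
        alpha = alpha0
        for _ in range(n // batch):
            idx = rng.integers(0, n, size=batch)
            # SVRG-style variance-reduced mini-batch gradient.
            g = grad(x, idx) - grad(x_snap, idx) + full_g
            if g_prev is None:
                direction = -g
            else:
                # Fletcher-Reeves-type conjugate direction.
                beta = (g @ g) / max(g_prev @ g_prev, 1e-12)
                direction = -g + beta * d_prev
                # BB-type step size from successive iterates and gradients,
                # clipped for stability (a stand-in for the RSBB rule).
                s_vec, y_vec = x - x_prev, g - g_prev
                denom = s_vec @ y_vec
                if abs(denom) > 1e-12:
                    alpha = float(np.clip((s_vec @ s_vec) / denom, 1e-4, 1.0))
            x_prev, g_prev, d_prev = x.copy(), g.copy(), direction
            x = x + alpha * direction
    return x

grad, d, n = make_least_squares()
x_hat = vr_scg(grad, d, n)
print("final full-gradient norm:", np.linalg.norm(grad(x_hat)))
```

The snapshot gradient removes most of the mini-batch noise, which is what lets a CG-style direction and an adaptive BB step remain stable without a line search; the clipping interval is an arbitrary choice for this sketch.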