Abstract

Stochastic algorithms have become increasingly popular for minimizing finite sums due to their efficiency and effectiveness. Recent advances include the stochastic average gradient (SAG) algorithm, the stochastic variance reduced gradient (SVRG) algorithm, and the SAGA algorithm, a class of incremental gradient algorithms. However, both SAG and SAGA require storing a gradient for each sample, which is expensive and often impractical for large-scale problems. To the best of our knowledge, existing memory-free algorithms such as SVRG may not be fast enough in this setting. Taking these observations into account, we propose a new optimization algorithm in this class, called Practical SAGA, which has a low memory requirement yet achieves a faster convergence rate than the state of the art. Remarkably, as a variant of SAGA, the Practical SAGA algorithm inherits the advantages of SAGA; for example, it directly supports non-strongly convex problems. Extensive experiments on four benchmarks demonstrate the efficiency and effectiveness of Practical SAGA.
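To make the memory issue raised above concrete, here is a minimal sketch of the baseline SAGA update on a least-squares finite sum (not the proposed Practical SAGA, whose details are not given in this abstract). The function name, the least-squares objective, and the step-size choice are illustrative assumptions; the point is the per-sample gradient table, whose O(n·d) storage is what the abstract identifies as impractical at scale.

```python
import numpy as np

def saga_least_squares(A, b, step=0.01, n_epochs=50, seed=0):
    """Baseline SAGA for (1/n) * sum_i 0.5 * (a_i @ x - b_i)^2.

    Illustrative sketch only: note the per-sample gradient table,
    which is the O(n * d) memory cost the abstract refers to.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)

    # One stored gradient per sample: this table is the memory bottleneck.
    grad_table = np.zeros((n, d))
    grad_avg = grad_table.mean(axis=0)

    for _ in range(n_epochs * n):
        j = rng.integers(n)
        # Fresh gradient of the j-th component at the current iterate.
        g_new = (A[j] @ x - b[j]) * A[j]
        # SAGA step: unbiased, variance-reduced gradient estimate.
        x -= step * (g_new - grad_table[j] + grad_avg)
        # Update the running average and the stored gradient for sample j.
        grad_avg += (g_new - grad_table[j]) / n
        grad_table[j] = g_new
    return x

# Usage (synthetic data, hypothetical): recover a linear model from noisy observations.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)
L = np.max(np.sum(A ** 2, axis=1))  # per-sample Lipschitz constant
x_hat = saga_least_squares(A, b, step=1.0 / (3.0 * L))
print(np.linalg.norm(x_hat - x_true))
```

By contrast, a memory-free method such as SVRG avoids this table by periodically recomputing a full gradient at a snapshot point, trading storage for extra gradient evaluations; the abstract's claim is that Practical SAGA keeps the low memory footprint while converging faster.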
