Abstract

Multi-stage optimization, which repeatedly invokes a stochastic algorithm restarted from the solution returned by the previous stage, has been widely employed in stochastic optimization. The momentum acceleration technique is well known for yielding gradient-based algorithms with fast convergence in large-scale optimization. To exploit this acceleration in multi-stage stochastic optimization, we develop a multi-stage stochastic gradient descent method with momentum acceleration, named MAGNET, for first-order stochastic convex optimization. The main ingredient is a negative momentum term, which extends Nesterov's momentum to the multi-stage setting. It can be incorporated into a stochastic gradient-based algorithm within a multi-stage mechanism to provide acceleration. The proposed algorithm attains an accelerated rate of convergence, and is adaptive and free from hyper-parameter tuning. Experimental results demonstrate that our algorithm is competitive with state-of-the-art methods on several typical optimization problems in machine learning.
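To make the multi-stage restarting mechanism concrete, the following is a minimal illustrative sketch of a generic multi-stage SGD with Nesterov-style momentum, in which each stage restarts from the previous stage's returned solution with a smaller step size and a reset momentum buffer. The update rule, stage schedule, and names (multistage_momentum_sgd, lr0, beta) are assumptions made for illustration only; this is not the paper's MAGNET algorithm or its negative-momentum update.

import numpy as np

def multistage_momentum_sgd(grad, x0, stages=5, iters_per_stage=1000,
                            lr0=0.1, beta=0.9):
    # Illustrative multi-stage SGD with Nesterov-style momentum restarts.
    # `grad(x)` returns a stochastic gradient at x. Each stage shrinks the
    # step size, resets the momentum buffer, and restarts from the solution
    # returned by the previous stage. (Hypothetical sketch, not MAGNET.)
    x = x0.copy()
    for s in range(stages):
        lr = lr0 / (2 ** s)          # smaller step size in each stage
        v = np.zeros_like(x)         # restart: reset the momentum buffer
        for _ in range(iters_per_stage):
            g = grad(x + beta * v)   # Nesterov look-ahead stochastic gradient
            v = beta * v - lr * g    # momentum update
            x = x + v                # parameter update
        # the stage's final iterate seeds the next stage
    return x

# Usage example on a noisy least-squares objective f(x) = 0.5 * ||Ax - b||^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 10)), rng.normal(size=100)
sg = lambda x: A.T @ (A @ x - b) + 0.01 * rng.normal(size=10)  # stochastic gradient
x_hat = multistage_momentum_sgd(sg, np.zeros(10))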
