Abstract

With the rapid growth of data volume in fields such as machine learning and networked systems, optimization-based methods inevitably face computational challenges that are well addressed by stochastic optimization strategies. As one of the most fundamental stochastic optimization algorithms, stochastic gradient descent (SGD) has been intensively developed and widely employed in machine learning over the past decade. Unfortunately, owing to technical difficulties, other SGD-based algorithms that can achieve better performance, such as momentum-based SGD (mSGD), still lack a rigorous theoretical basis. Motivated by this gap, in this paper we prove that the mSGD algorithm converges almost surely along each trajectory, and we also analyze its convergence rate.
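For reference, a commonly used (heavy-ball) form of the momentum SGD recursion is sketched below. The abstract does not state the exact formulation analyzed in the paper, so the iterate $\theta_k$, momentum variable $v_k$, step size $\alpha_k$, momentum coefficient $\beta$, and stochastic gradient $g_k$ are illustrative assumptions rather than the paper's notation.

% A standard momentum SGD (heavy-ball) recursion; symbols are illustrative,
% not necessarily the exact formulation analyzed in the paper.
\begin{align}
  g_k &= \nabla f(\theta_k;\, \xi_k) && \text{(stochastic gradient from sample } \xi_k\text{)} \\
  v_{k+1} &= \beta\, v_k + g_k, \qquad 0 \le \beta < 1 \\
  \theta_{k+1} &= \theta_k - \alpha_k\, v_{k+1}
\end{align}
% Almost-sure (trajectory-wise) convergence means that, with probability one
% over the random samples \xi_k, the sequence {\theta_k} converges, e.g.
% \nabla f(\theta_k) -> 0 along each realized trajectory.

In this reading, "almost surely convergent at each trajectory" refers to convergence of every realized sequence of iterates with probability one, rather than convergence in expectation.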
