Abstract
In this chapter, we focus on stochastic convex optimization problems, which have found wide applications in machine learning. We first study two classic methods: stochastic mirror descent and the accelerated stochastic gradient descent method.
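To give a concrete flavor of the first of these methods, the following is a minimal sketch (not taken from the chapter) of stochastic mirror descent with the entropy mirror map on the probability simplex, i.e., the exponentiated-gradient update. The problem instance, function names, and step size are illustrative assumptions.

```python
import numpy as np

def stochastic_mirror_descent(A, b, steps=2000, eta=0.1, seed=0):
    """Illustrative sketch: stochastic mirror descent with the entropy
    mirror map (multiplicative/exponentiated-gradient updates) applied to
    a least-squares objective constrained to the probability simplex.
    All choices here are assumptions made for the example."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.full(d, 1.0 / d)                  # start at the simplex center
    avg = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)                  # sample one data point
        g = 2.0 * (A[i] @ x - b[i]) * A[i]   # stochastic gradient of (a·x - b)^2
        x = x * np.exp(-eta * g)             # mirror (multiplicative) update
        x /= x.sum()                         # renormalize: Bregman projection
        avg += x
    return avg / steps                       # averaged iterate

# Hypothetical problem instance: recover a point on the simplex.
rng = np.random.default_rng(1)
d = 5
x_true = rng.dirichlet(np.ones(d))
A = rng.normal(size=(200, d))
b = A @ x_true
x_hat = stochastic_mirror_descent(A, b)
```

With the Euclidean mirror map in place of the entropy map, the same scheme reduces to projected stochastic gradient descent, which is the sense in which mirror descent generalizes it.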