Abstract

In this chapter, we focus on stochastic convex optimization problems, which have found wide applications in machine learning. We first study two classic methods: stochastic mirror descent and accelerated stochastic gradient descent.
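As a concrete illustration of the first method mentioned above, the following is a minimal sketch of stochastic mirror descent with the negative-entropy mirror map on the probability simplex (the classical exponentiated-gradient setup). The toy objective, function names, and step size are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropic_mirror_descent(stoch_grad, x0, n_steps, step_size):
    """Sketch of stochastic mirror descent with the negative-entropy
    mirror map on the probability simplex (exponentiated gradient).
    `stoch_grad` returns an unbiased noisy gradient at the current point."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for _ in range(n_steps):
        g = stoch_grad(x)                 # noisy gradient estimate
        x = x * np.exp(-step_size * g)    # mirror (dual-space) step
        x /= x.sum()                      # Bregman projection onto the simplex
        avg += x
    return avg / n_steps                  # averaged iterate, standard for SMD rates

# Toy linear problem: minimize E[<c + noise, x>] over the simplex;
# the optimum puts all mass on the coordinate with the smallest c.
c = np.array([0.9, 0.2, 0.7])
stoch_grad = lambda x: c + 0.1 * rng.standard_normal(3)
x_hat = entropic_mirror_descent(stoch_grad, np.ones(3) / 3, 2000, 0.1)
```

With the Euclidean mirror map in place of negative entropy, the same template reduces to projected stochastic gradient descent.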
