Abstract

In this chapter, we study an important class of convex optimization problems whose objective function is given by the summation of many components. With wide applications in machine learning and distributed optimization, these problems can be viewed either as deterministic optimization problems with a special finite-sum structure or as stochastic optimization problems with a discrete distribution. Accordingly, we study two typical classes of randomized algorithms for solving them. The first class incorporates random block decomposition in the dual space into primal–dual methods for deterministic convex optimization, while the second incorporates variance reduction techniques into stochastic gradient descent methods for stochastic optimization.
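To make the second class of methods concrete, below is a minimal sketch of a variance-reduced stochastic gradient method in the SVRG style for a finite-sum problem. The least-squares components, step size, and loop lengths are illustrative assumptions, not taken from the chapter; the point is only the structure of the update, in which a component gradient at the current iterate is corrected by the same component's gradient at a snapshot point plus the full gradient at that snapshot.

```python
import numpy as np

# Sketch of an SVRG-style method for the finite-sum problem
#   min_x (1/n) * sum_i f_i(x),
# illustrated with least-squares components f_i(x) = 0.5 * (a_i @ x - b_i)**2
# (the component choice and all parameters are assumptions for illustration).

def svrg(A, b, step=0.05, n_epochs=50, inner_steps=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = inner_steps or n              # length of each inner loop
    x_ref = np.zeros(d)               # snapshot (reference) point
    for _ in range(n_epochs):
        # Full gradient at the snapshot: (1/n) * A^T (A x_ref - b)
        full_grad = A.T @ (A @ x_ref - b) / n
        x = x_ref.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Component gradient at the current iterate and at the snapshot
            g_i = A[i] * (A[i] @ x - b[i])
            g_i_ref = A[i] * (A[i] @ x_ref - b[i])
            # Variance-reduced stochastic gradient: unbiased, with variance
            # vanishing as both x and x_ref approach the minimizer
            x = x - step * (g_i - g_i_ref + full_grad)
        x_ref = x                     # refresh the snapshot
    return x_ref
```

For example, on a consistent system `b = A @ x_true` the iterates converge to `x_true`; the snapshot correction is what removes the persistent gradient noise that forces plain stochastic gradient descent to use diminishing step sizes.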
