We consider in this paper a class of composite optimization problems whose objective function is given by the summation of a general smooth and nonsmooth component, together with a relatively simple nonsmooth term. We present a new class of first-order methods, namely the gradient sliding algorithms, which can skip the computation of the gradient for the smooth component from time to time. As a consequence, these algorithms require only $\mathcal{O}(1/\sqrt{\epsilon})$ gradient evaluations for the smooth component in order to find an $\epsilon$-solution for the composite problem, while still maintaining the optimal $\mathcal{O}(1/\epsilon^2)$ bound on the total number of subgradient evaluations for the nonsmooth component. We then present a stochastic counterpart of these algorithms and establish similar complexity bounds for solving an important class of stochastic composite optimization problems. Moreover, if the smooth component in the composite function is strongly convex, the developed gradient sliding algorithms can significantly reduce the number of gradient and subgradient evaluations for the smooth and nonsmooth component to $\mathcal{O}(\log(1/\epsilon))$ and $\mathcal{O}(1/\epsilon)$, respectively. Finally, we generalize these algorithms to the case when the smooth component is replaced by a nonsmooth one possessing a certain bilinear saddle point structure.
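To make the "skipping gradients" idea concrete, the following is a schematic sketch (not the paper's exact algorithm, whose stepsize rules and averaging are more involved): each outer iteration evaluates the gradient of the smooth component once, then runs several cheap subgradient steps on the nonsmooth component against that frozen gradient. All function names and parameters here (`grad_f`, `subgrad_h`, `beta`, `step`, the iteration counts) are illustrative assumptions.

```python
import numpy as np

def gradient_sliding(grad_f, subgrad_h, x0, outer_iters=50, inner_iters=20,
                     beta=1.0, step=0.01):
    """Schematic gradient sliding for min_x f(x) + h(x), f smooth, h nonsmooth.

    One gradient evaluation of f per outer iteration; the inner loop applies
    subgradient steps to the proximal subproblem
        min_u  <g, u> + h(u) + (beta/2) ||u - x||^2
    with g = grad_f(x) held fixed, so grad_f is "skipped" during inner steps.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(outer_iters):
        g = grad_f(x)              # single (expensive) gradient evaluation
        u = x.copy()
        for _ in range(inner_iters):
            # cheap subgradient step: grad_f is NOT re-evaluated here
            u -= step * (g + subgrad_h(u) + beta * (u - x))
        x = u
    return x
```

As a toy usage, take the smooth part $f(x) = \tfrac12\|x-c\|^2$ and the nonsmooth part $h(x) = \lambda\|x\|_1$; the sketch then converges toward the soft-thresholding of $c$, while calling `grad_f` only `outer_iters` times against `outer_iters * inner_iters` subgradient calls.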