Abstract

Motivated by recent work of Renegar (A framework for applying subgradient methods to conic optimization problems, arXiv:1503.02611v2, 2015), we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. Our problem of interest is the general convex optimization problem $$f^* = \min_{x \in Q} f(x)$$, where we presume knowledge of a strict lower bound $$f_{\mathrm{slb}} < f^*$$. [Indeed, $$f_{\mathrm{slb}}$$ is naturally known when optimizing many loss functions in statistics and machine learning (least-squares, logistic loss, exponential loss, total variation loss, etc.), as well as in Renegar's transformed version of the standard conic optimization problem (arXiv:1503.02611v2, 2015); in all these cases one has $$f_{\mathrm{slb}} = 0 < f^*$$.] We introduce a new functional measure called the growth constant $$G$$ for $$f(\cdot)$$, which measures how quickly the level sets of $$f(\cdot)$$ grow relative to the function value and which plays a fundamental role in the complexity analysis. When $$f(\cdot)$$ is non-smooth, we present new computational guarantees for the Subgradient Descent Method and for smoothing methods that can improve existing computational guarantees in several ways, most notably when the initial iterate $$x^0$$ is far from the optimal solution set. When $$f(\cdot)$$ is smooth, we present a scheme for periodically restarting the Accelerated Gradient Method that can also improve existing computational guarantees when $$x^0$$ is far from the optimal solution set; in the presence of added structure, we present a scheme using parametrically increased smoothing that further improves the associated computational guarantees.
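
To make the setting concrete, the following is a minimal, self-contained sketch (not taken from the paper) of a subgradient method that exploits a known strict lower bound: it uses the classical Polyak-type step size with $$f_{\mathrm{slb}}$$ substituted for the unknown $$f^*$$, applied to an overdetermined least-squares loss for which $$f_{\mathrm{slb}} = 0 < f^*$$. The function, data, and step rule are illustrative assumptions, not the schemes or guarantees developed in the paper.

```python
import numpy as np

def subgradient_method_with_slb(f, subgrad, x0, f_slb, max_iters=500, tol=1e-12):
    """Subgradient method using a known strict lower bound f_slb < f*.

    Illustrative sketch only: the step size is the classical Polyak-type rule
    with f_slb in place of the unknown optimal value f*,
        t_k = (f(x_k) - f_slb) / ||g_k||^2,
    which is not the step rule or analysis from the paper. With a strict
    underestimate, this rule is only guaranteed to drive the best objective
    value to within roughly (f* - f_slb) of optimal.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_val = x.copy(), f(x)
    for _ in range(max_iters):
        g = subgrad(x)
        gnorm2 = float(np.dot(g, g))
        if gnorm2 <= tol:              # (near-)zero subgradient: stop
            break
        t = (f(x) - f_slb) / gnorm2    # Polyak-type step with the lower bound
        x = x - t * g
        val = f(x)
        if val < best_val:             # track the best iterate seen so far
            best_x, best_val = x.copy(), val
    return best_x, best_val

# Overdetermined least-squares loss: f_slb = 0 is a strict lower bound (f* > 0).
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
subgrad = lambda x: A.T @ (A @ x - b)   # gradient (the loss is smooth here)
x_best, f_best = subgradient_method_with_slb(f, subgrad, np.zeros(2), f_slb=0.0)
print(f_best)
```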
