Abstract

The usual approach to developing and analyzing first-order methods for nonsmooth (stochastic or deterministic) convex optimization assumes that the objective function is uniformly Lipschitz continuous with parameter M_f. However, in many settings, the nondifferentiable convex function f is not uniformly Lipschitz continuous—for example, (i) the classical support vector machine problem, (ii) the problem of minimizing the maximum of convex quadratic functions, and even (iii) the univariate setting with [Formula: see text]. Herein, we develop a notion of “relative continuity” that is determined relative to a user-specified “reference function” h (that should be computationally tractable for algorithms), and we show that many nondifferentiable convex functions are relatively continuous with respect to a correspondingly fairly simple reference function h. We also similarly develop a notion of “relative stochastic continuity” for the stochastic setting. We analyze two standard algorithms—the (deterministic) mirror descent algorithm and the stochastic mirror descent algorithm—for solving optimization problems in these new settings, providing the first computational guarantees for instances where the objective function is not uniformly Lipschitz continuous. This paper is a companion paper for nondifferentiable convex optimization to the recent paper by Lu et al. [Lu H, Freund RM, Nesterov Y (2018) Relatively smooth convex optimization by first-order methods, and applications. SIAM J. Optim. 28(1): 333–354.], which developed analogous results for differentiable convex optimization.
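
The abstract refers to the (deterministic) mirror descent algorithm, whose update at each iteration is a Bregman proximal step determined by the reference function h. The sketch below illustrates only that generic template, not the paper's specific method or guarantees: the names `mirror_descent` and `bregman_prox_step`, the choice h(x) = ½‖x‖² (for which the Bregman step reduces to an ordinary subgradient step), and the ℓ1 test function are illustrative assumptions.

```python
import numpy as np

def mirror_descent(f, subgrad, bregman_prox_step, x0, step_sizes):
    """Generic mirror descent sketch: at each iteration, take a subgradient
    g of f at x and move via the Bregman proximal step induced by the
    user-specified reference function h."""
    x = np.asarray(x0, dtype=float)
    best_x, best_val = x.copy(), f(x)
    for t in step_sizes:
        g = subgrad(x)
        x = bregman_prox_step(x, g, t)
        val = f(x)
        if val < best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

# Illustrative instance (not from the paper): h(x) = 0.5*||x||^2, so the
# Bregman step is a plain subgradient step, applied to the nonsmooth
# function f(x) = ||x - a||_1.
a = np.array([1.0, -2.0, 3.0])
f = lambda x: np.sum(np.abs(x - a))
subgrad = lambda x: np.sign(x - a)      # one element of the subdifferential
prox = lambda x, g, t: x - t * g        # Bregman proximal step for this h

steps = [1.0 / np.sqrt(k + 1) for k in range(200)]
x_best, val_best = mirror_descent(f, subgrad, prox, np.zeros(3), steps)
print(x_best, val_best)
```

With a different reference function h (for instance, one adapted to the growth of f, as the relative-continuity viewpoint suggests), only the Bregman proximal step would change; the outer loop stays the same.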
